
Re: [SLUG] enter root password or type ctrl-D to continue...


Hiyas,  I figured I'd send a bit more info.  Ubuntu set this up for me, and the
grub entry for this kernel image points to /dev/md1 for root.  I'm starting to
think this is a process that hangs around after the kill signal goes out at
shutdown.  Do binary drivers have this sort of problem?  I have both the vmware
and nvidia stuff loaded.
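
In case it helps, this is roughly what I'm planning to run from a console just
before the next reboot, to see what still has files open on / and whether the
binary modules are still loaded (the vmware init script path and the
vmmon/vmnet module names are guesses for my install):

    fuser -vm /
    lsmod | grep -Ei 'vmmon|vmnet|nvidia'
    /etc/init.d/vmware stop && fuser -vm /

The first line shows which processes still have files open on the root fs, the
second whether the binary driver modules are loaded, and the last one checks
again after stopping vmware's services.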

One difference in the output below is the device nodes: md0 and md2 are using
the evms nodes, which is good, but md1 is just using the regular device nodes.
Could this be a problem with udev or something similar?  (I've pasted a couple
of checks I plan to run after the output.)

# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Tue Sep  6 10:13:12 2005
     Raid Level : raid1
     Array Size : 248896 (243.06 MiB 254.87 MB)
    Device Size : 248896 (243.06 MiB 254.87 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Nov 15 12:06:00 2005
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 51048453:23b76747:b05e0d5c:b2401395
         Events : 0.1779

    Number   Major   Minor   RaidDevice State
       0     253        1        0      active sync   /dev/evms/.nodes/sdb1
       1     253        2        1      active sync   /dev/evms/.nodes/sda1
# mdadm -D /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Tue Sep  6 10:14:27 2005
     Raid Level : raid1
     Array Size : 225391872 (214.95 GiB 230.80 GB)
    Device Size : 225391872 (214.95 GiB 230.80 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Nov 15 12:12:50 2005
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : abbe13b4:2ed24182:5de0de15:05441cf7
         Events : 0.1151320

    Number   Major   Minor   RaidDevice State
       0     253        3        0      active sync   /dev/evms/.nodes/sdb4
       1     253        4        1      active sync   /dev/evms/.nodes/sda4
# mdadm -D /dev/md1
/dev/md1:
        Version : 00.90.01
  Creation Time : Tue Sep  6 10:13:27 2005
     Raid Level : raid1
     Array Size : 14651200 (13.97 GiB 15.00 GB)
    Device Size : 14651200 (13.97 GiB 15.00 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Nov 15 12:13:05 2005
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 0bc06dae:97cc2fa9:94ce0a4f:82fa8a70
         Events : 0.1381985

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       1       8        3        1      active sync   /dev/sda3

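For the device node question above, these are the checks I'm going to try, to
work out where md1's plain /dev/sd* nodes are coming from (I think Ubuntu keeps
the mdadm config in /etc/mdadm/mdadm.conf, so that path is a guess, and the
evms node names are just extrapolated from the output above):

    cat /proc/mdstat
    grep -E '^(DEVICE|ARRAY)' /etc/mdadm/mdadm.conf
    ls -l /dev/evms/.nodes/sda3 /dev/evms/.nodes/sdb3

/proc/mdstat shows which component devices the kernel actually assembled each
array from, the grep shows which nodes mdadm is told to scan, and the ls just
checks whether evms nodes exist at all for md1's members.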

Thanks again.



On Tue, 15 Nov 2005 11:39:28 +1100
unauthorized@xxxxxxxxxxxxxxxx wrote:

> Hiyas,
> 
>   For the past few months I've been ignoring a problem with my system whenever
> it reboots, which happens quite often actually (dual boot).  I have 3
> partitions which are raid-1 (md0, md1, md2), all with ext3 filesystems.
> 
> md0 = /boot
> md1 = /
> md2 = other
> 
> For some reason md1 is busy on reboot, so it never gets remounted read-only
> like the other two; md0 and md2 do go read-only.  The problem is that when
> booting into linux it asks me for the root password, or to type ctrl-D to
> continue on into run level 3.  I have searched and searched through the logs
> and I can't find any kind of error.  The only thing I can think of is that
> md1 doesn't go read-only when rebooting/shutting down.  I just didn't think
> this was the behaviour of an ext3 fs.
> 
> Can someone point me in the right direction as to where to look so I can fix
> this?
> 
> Thanks.
> -- 
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html