Hi all,
I have a server that already has Ubuntu 16.04 installed on a RAID 1 device (two SSDs, software RAID). When I try to run the latest installer (Qlustar 10.1.1-3) from a live DVD, it fails to boot with the following error:
No supported filesystem images found at /live
Looking in the boot.log, I see
error reading /lib/udev/hwdb.bin: No such file or directory
Do you think that is a result of the drives being md devices? The initramfs kernel sees /dev/md* and /dev/md12[67].
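For reference, the assembled arrays can be listed from a shell like this:

    # show the software RAID arrays the kernel has assembled
    cat /proc/mdstat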
Thanks,
Ian
Hi,
Do you have a /live directory in your Ubuntu 16.04 installation? If you do, rename it temporarily. The Qlustar installer boots by looking for a /live directory on all storage devices and using the first one it finds.
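A one-liner is enough to check, run on the installed system:

    # does the installed system have a /live directory in its root?
    ls -ld /live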
Regards, Rolandas
Hi Rolandas,
I don't believe so, but I will verify tomorrow.
Thanks,
Ian
Nope - no /live on the installed system ...
Hi,
In that case it can only be a corrupted DVD image or a bad burn :-( It has been many years since we last used a DVD/CD. We use a USB flash drive instead, or network booting (it was not difficult to make the Qlustar installer boot via the network with a faiserver install).
Regards Rolandas
Hmm, I tried multiple iterations, 10.1.1-2 and 10.1.1-3. Odd that I would get corrupted ISOs from burns both times.
I also checked the checksums of the gz file, to confirm that the download itself was correct.
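That check amounts to something like this (file name illustrative; compare the output against the published value):

    # compute the checksum of the downloaded image
    sha256sum qlustar-10.1.1-3.img.gz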
I'll try the USB flash drive approach and see if I have better luck.
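For the record, writing the image to a stick would look roughly like this (a sketch only; the image name is illustrative, and /dev/sdX must be replaced with the actual USB device):

    # decompress and write the installer image -- double-check the target device!
    gunzip -c qlustar-10.1.1-3.img.gz | sudo dd of=/dev/sdX bs=4M conv=fsync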
Thanks,
Ian
"I" == Ian Kaufman ikaufman@eng.ucsd.edu writes:
Hi Ian,
I> Nope - no /live on the installed system
I've had situations before where the installer barked when an old software RAID was present. Assuming that you really want to perform the install (which will wipe the disks anyway), you can try to just brute-force wipe the FS and RAID info before starting the installer. For that, boot into your old system and do:
    for d in a b; do   # assume a mirror of /dev/sd[ab]
        dd if=/dev/zero of=/dev/sd$d bs=1M count=1024
    done
Obviously, you won't be able to shut down cleanly anymore after this, so you'll have to reset the machine to reboot with the installer medium present.
If that still doesn't help, you probably really have a corrupted installer medium.
Good luck,
Roland
Thanks Roland,
That is my gut feeling. I'll give it a try and let you know.
Ian
Hi,
wipefs, if it exists on your system, is a better alternative to zeroing with dd. It wipes only the signature patterns by which blkid and other tools recognise the structures on the disks, and afterwards you can still reboot at least once.
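A minimal sketch of that variant, assuming the same /dev/sd[ab] mirror as in Roland's dd loop:

    # wipe only the filesystem/RAID signatures; --force is needed while
    # the devices are still in use by the running system
    wipefs --all --force /dev/sda /dev/sdb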
Regards Rolandas
Yep, that was it ... the installer didn't like the existing software RAID. And using dd, I was able to reboot from the CLI.
One feature request ... allow the end user to pick their own private IP space. I prefer using 192.168.X.X, since the campus here uses the 172 and 10 ranges.
Ian
"I" == Ian Kaufman ikaufman@eng.ucsd.edu writes:
I> Yep, that was it ... the installer didn't like the existing sw
I> RAID. And using dd I was able to reboot from the cli.
Good news, glad it worked.
I> One feature request ... allow the end user to pick their own
I> private IP space. I prefer using 192.168.X.X as campus here uses
I> 172 and 10 around campus.
You can already choose that freely. The suggested, initially displayed private network IP space is just determined by the number of compute nodes you select (this number is otherwise irrelevant), but it can be changed to a different one in the 'IP address' field below 'Cluster network adapter'. This should also be explained when you press F1.
Best,
Roland
I tried to change the IP, but it wouldn't let me overwrite "172" in the initial installer.
However, I logged into the console and made the change in the interfaces file. I'll see if it carries through. I can always reinstall if needed.
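The change amounts to a static stanza along these lines (interface name and addresses are illustrative, not my actual config):

    # /etc/network/interfaces -- cluster-facing interface
    auto eth1
    iface eth1 inet static
        address 192.168.0.1
        netmask 255.255.0.0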
Ian
"I" == Ian Kaufman ikaufman@eng.ucsd.edu writes:
I> I tried to change the IP, but it wouldn't let me overwrite "172"
I> in the initial installer.
Ah, OK, I didn't remember that. You'll really have to change the number of nodes to change it. E.g. 64 nodes will give you 192.168.x.x, 128 nodes 172.x.x, and 512 nodes 10.x.x.
I> However, I logged into the console and made the change in the
I> interfaces file. I'll see if it carries through. I can always
I> reinstall if needed.
That will require some hand-work later on, with the risk of causing unnecessary trouble if some things are not changed. Better to reinstall at this stage.
Hmm, my cluster will have ~200 nodes (plus some NFS servers with interfaces on the private network).
Is there any way during installation to set 200 nodes and still use 192.168.X.X?
Thanks for your answers and insight so far :)
Ian
"I" == Ian Kaufman ikaufman@eng.ucsd.edu writes:
I> Hmm, my cluster will have ~200 nodes (plus some NFS servers with
I> interfaces on the private network).
I> Is there any way during installation to set 200 nodes and still
I> use 192.168.X.X?
The only relevance of the 'number of nodes' value is to set the suggested network range and the number of NFS threads running on the head-node in /etc/default/nfs-kernel-server (the value of RPCNFSDCOUNT, which you can easily change later if the calculated value is too low). So there is no problem with just choosing 64 for 'number of nodes', e.g. in order to get your 192.168.x.x range.
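For example, raising the thread count later is just an edit plus a restart (value illustrative):

    # /etc/default/nfs-kernel-server
    RPCNFSDCOUNT=64

    # apply the change
    systemctl restart nfs-kernel-server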
I> Thanks for your answers and insight so far :)
You're welcome.
Roland