Hi, I am completely new to Qlustar and need some help getting the cluster running, as I don't really understand the doc. I installed Qlustar four times with no problem, but since I could not find out how to run the virtual FE node (or the demo-mode VMs, for that matter), the last install was done without creating a virtual FE node. Section "1.5.5. Spack Package Manager" assumes that I installed the FE node, which I did not, and there is no alternative given. I tried in vain to switch from root to the user softadm in the main terminal; softadm is not even in /etc/passwd. As root I tried my luck with the spack command, leading to:
spack compiler find
-bash: spack: command not found
How can I do the Spack section without creating the virtual FE node?
Also, I don't mind reinstalling Qlustar with the virtual FE node, but I would appreciate knowing how to run it.
Now here is my second question: how do I run the demo VMs?
Here is my frustrating sequence of events, in case someone wants to rewrite that part of the doc ;-)
demo-system -a start
Starting VMs in a tmux session ...
Created tmux session demo-system. => Attach with 'console-demo-vms'
So far so good, so I ran console-demo-vms.
Something happened and I got a nice command line at the bottom of the terminal, but it looks like I am still on the front end, not in a demo VM: not only does the prompt show the front-end hostname, but htop shows the full 32 GB of RAM and all the CPUs, whereas according to /etc/qlustar/vm-configs/demo.conf it should be:
CN_MEM=1285
HA_MEM=1024
I am really confused now; console-demo-vms did not show the demo VMs. Any idea what I am doing wrong here?
Now, from the head node as root:
0 root@jcluster ~ # dsh -a uptime
beo-201: ssh: connect to host beo-201 port 22: No route to host
beo-202: ssh: connect to host beo-202 port 22: No route to host
beo-203: ssh: connect to host beo-203 port 22: No route to host
beo-204: ssh: connect to host beo-204 port 22: No route to host
beo-201 through beo-204 are supposed to be the demo VMs.
I have many more questions, but that's it for now. Planning forward: I did manage to run the QluMan GUI and played overnight (until 5 am) trying to add a working user, add a node, etc.
The doc is unusable for me, and there's no online tutorial I could find (other than for the first install, which is very simple). I would really appreciate it if someone could describe the full procedure to add a node: catching the MAC address of the new node, building an image for it, and including it in Slurm. Something to start with.
Thanks for your help.
I would really love to have Qlustar working ASAP for my very urgent research project involving GPUs.
Thanks
Chris
Hi Christophe,
On 11/11/23 07:28, Christophe Guilbert wrote:
Hi, I am completely new to Qlustar and need some help getting the cluster running, as I don't really understand the doc. I installed Qlustar four times with no problem, but since I could not find out how to run the virtual FE node (or the demo-mode VMs, for that matter), the last install was done without creating a virtual FE node. Section "1.5.5. Spack Package Manager" assumes that I installed the FE node, which I did not, and there is no alternative given. I tried in vain to switch from root to the user softadm in the main terminal; softadm is not even in /etc/passwd. As root I tried my luck with the spack command, leading to:
spack compiler find
-bash: spack: command not found
How can I do the Spack section without creating the virtual FE node?
You must do the Spack setup on a netboot node. It's easiest to do on the virtual FE node.
Also, I don't mind reinstalling Qlustar with the virtual FE node, but I would appreciate knowing how to run it.
It's automatically started at boot as a systemd service, as explained in the first-steps doc.
Now here is my second question: how do I run the demo VMs?
Here is my frustrating sequence of events, in case someone wants to rewrite that part of the doc ;-)
demo-system -a start
Starting VMs in a tmux session ...
Created tmux session demo-system. => Attach with 'console-demo-vms'
So far so good, so I ran console-demo-vms.
Something happened and I got a nice command line at the bottom of the terminal, but it looks like I am still on the front end, not in a demo VM: not only does the prompt show the front-end hostname, but htop shows the full 32 GB of RAM and all the CPUs, whereas according to /etc/qlustar/vm-configs/demo.conf it should be:
CN_MEM=1285
HA_MEM=1024
I am really confused now; console-demo-vms did not show the demo VMs. Any idea what I am doing wrong here?
console-demo-vms attaches to a tmux session with each demo node's console in its own window. As explained in the first-steps doc, you can change to a different window in the tmux session using the <Ctrl-t n> keystroke. That way you can switch to the demo nodes' consoles.
Now, from the head node as root:
0 root@jcluster ~ # dsh -a uptime
beo-201: ssh: connect to host beo-201 port 22: No route to host
beo-202: ssh: connect to host beo-202 port 22: No route to host
beo-203: ssh: connect to host beo-203 port 22: No route to host
beo-204: ssh: connect to host beo-204 port 22: No route to host
beo-201 through beo-204 are supposed to be the demo VMs.
The demo nodes have not booted for some reason; you should check their consoles (see above) for errors.
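A quick way to confirm from the head-node which demo nodes are unreachable is to probe their ssh port directly. This is a generic sketch, not a Qlustar tool; the hostnames are the ones from this thread:

```shell
# Probe port 22 on each demo node with a 1-second timeout.
# "unreachable" covers both DNS failure and no route/refused.
for h in beo-201 beo-202 beo-203 beo-204; do
    if timeout 1 bash -c "echo > /dev/tcp/$h/22" 2>/dev/null; then
        echo "$h: ssh port open"
    else
        echo "$h: unreachable"
    fi
done
```

If all four show unreachable while the nodes' consoles look booted, the problem is on the network side rather than in the VMs themselves.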
I have many more questions, but that's it for now. Planning forward: I did manage to run the QluMan GUI and played overnight (until 5 am) trying to add a working user, add a node, etc.
The doc is unusable for me, and there's no online tutorial I could find (other than for the first install, which is very simple). I would really appreciate it if someone could describe the full procedure to add a node: catching the MAC address of the new node, building an image for it, and including it in Slurm. Something to start with.
It's explained in the QluMan docs.
Good luck,
Roland
Thanks for your help.
I would really love to have Qlustar working ASAP for my very urgent research project involving GPUs.
Thanks
Chris
Thanks for the very fast answer, Roland.
I did learn tmux on my Linux workstation; besides the fact that it's really cool (and I plan to use it in the future), I am still struggling to get the demo and FE VMs working in tmux.
When starting the demo system (demo-system -a start) followed by console-demo-vms, I know for a fact now that I am in tmux, but I am still on c1-head. Any Ctrl-t n (or p) says "no next window", so I can't switch to the demo VMs, nor to the FE VM, which, if I understand correctly, should have started automatically.
Once I detach from tmux with Ctrl-t d,
tmux ls gives:
tmux ls
error connecting to /tmp/tmux-0/default (No such file or directory)
I don't see the demo or FE sessions.
When trying to start console-fe-vm:
console-fe-vm
No tmux session 'login-vm' with server socket 'ql-vms'
What's happening here? Note that during the install I renamed the head node c1-head to jcluster and the FE to jclustert (jcluster and jclustert are both found in the DNS). Would that change anything?
If I do:
0 root@jcluster ~ # dsh -a uptime
beo-201: ssh: connect to host beo-201 port 22: No route to host
beo-202: ssh: connect to host beo-202 port 22: No route to host
beo-203: ssh: connect to host beo-203 port 22: No route to host
beo-204: ssh: connect to host beo-204 port 22: No route to host
login-c: ssh: connect to host login-c port 22: No route to host
So yes, no demo or FE connection. Why do you think this is happening?
Thanks for your help
Qlustar General mailing list -- qlustar-general@qlustar.org To unsubscribe send an email to qlustar-general-leave@qlustar.org
Hi Christophe,
It's clear now that the FE and demo VMs don't start properly for some reason. Please check that your head node is capable of running KVM virtual machines at all by executing
$ lsmod | grep kvm_
If this doesn't show either kvm_intel or kvm_amd, virtualization support has been disabled in the BIOS of your head node and you need to re-enable it.
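As a complementary check (independent of lsmod), you can see whether the CPU advertises the hardware virtualization flags at all. This is a generic Linux check, not Qlustar-specific:

```shell
# vmx = Intel VT-x, svm = AMD-V. A count of 0 means the CPU doesn't
# expose virtualization, or it is disabled in the BIOS.
nflags=$(grep -Ec 'vmx|svm' /proc/cpuinfo || true)
echo "CPU entries with virtualization flags: $nflags"
```

If the flags are present but the kvm_* module is still missing, the module may simply need loading (modprobe kvm_intel or modprobe kvm_amd).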
If kvm_amd/kvm_intel is loaded, you can try to start up the FE node VM directly with the Qlustar KVM management script on the command line as root:
$ manage-kvm-guest -a start -n login
This should then tell you where things go wrong.
BTW: Renaming head and FE to jcluster(t) while executing the installer is supported and cannot cause the problem.
Best,
Roland
Thanks Roland. Yes, KVM works much better with virtualization support! ;-) I really thought I had turned it on while inspecting the BIOS prior to the install! Anyway, moving forward to a new installation problem, in Spack:
First, I completely missed the softadm password (I got the one for the demo account). I was giving the same password each time I was asked for one; I guess this one was generated like the demo account's, and I completely missed it. Not a big deal: I tried to change it as root on the FE but was not allowed. (Use QluMan/LDAP for that.) I was able to ssh softadm@jclustert (my FE hostname) to initialize the stack.
Here is the error message when trying to initialize the stack using the spack install gcc@13.1.0 target=x86_64 command:

....
==> No patches needed for gmake
==> gmake: Executing phase: 'autoreconf'
==> gmake: Executing phase: 'configure'
==> gmake: Executing phase: 'build'
==> gmake: Executing phase: 'install'
==> gmake: Successfully installed gmake-4.4.1-d3ghmecmrwxdzizqkb2vfxixc4a7xyu4
  Stage: 2.72s.  Autoreconf: 0.00s.  Configure: 15.33s.  Build: 9.31s.  Install: 0.03s.  Post-install: 0.07s.  Total: 27.71s
[+] /usr/share/spack/root/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/gmake-4.4.1-d3ghmecmrwxdzizqkb2vfxixc4a7xyu4
[+] /usr (external m4-1.4.18-6mn3iho2fr6edfzw7cx4oju3h3nl5blf)
[+] /usr (external perl-5.34.0-hjugzvixfyfpl6yufq5injfzrafmzycu)
==> Waiting for autoconf-archive-2023.02.20-6mk7s3gez7sjc2h4edsgosm2xpetcel6
==> Installing autoconf-archive-2023.02.20-6mk7s3gez7sjc2h4edsgosm2xpetcel6
==> No binary for autoconf-archive-2023.02.20-6mk7s3gez7sjc2h4edsgosm2xpetcel6 found: installing from source
==> Fetching http://ftpmirror.gnu.org/autoconf-archive/autoconf-archive-2023.02.20.tar.xz
==> No patches needed for autoconf-archive
==> autoconf-archive: Executing phase: 'autoreconf'
==> autoconf-archive: Executing phase: 'configure'
==> autoconf-archive: Executing phase: 'build'
==> autoconf-archive: Executing phase: 'install'
==> autoconf-archive: Successfully installed autoconf-archive-2023.02.20-6mk7s3gez7sjc2h4edsgosm2xpetcel6
  Stage: 2.74s.  Autoreconf: 0.00s.  Configure: 1.59s.  Build: 0.05s.  Install: 1.89s.  Post-install: 0.87s.  Total: 7.35s
[+] /usr/share/spack/root/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/autoconf-archive-2023.02.20-6mk7s3gez7sjc2h4edsgosm2xpetcel6
[+] /usr (external texinfo-6.8-r5t34jrn4b3pnquzayc3f5yqnfc6ndet)
==> Waiting for zlib-1.2.13-qld66xb7dn5a33fda5qkb4emglsmn6ah
==> Installing zlib-1.2.13-qld66xb7dn5a33fda5qkb4emglsmn6ah
==> No binary for zlib-1.2.13-qld66xb7dn5a33fda5qkb4emglsmn6ah found: installing from source
==> Fetching http://zlib.net/fossils/zlib-1.2.13.tar.gz
==> No patches needed for zlib
==> zlib: Executing phase: 'edit'
==> zlib: Executing phase: 'build'
==> zlib: Executing phase: 'install'
==> zlib: Successfully installed zlib-1.2.13-qld66xb7dn5a33fda5qkb4emglsmn6ah
  Stage: 0.44s.  Edit: 0.81s.  Build: 1.99s.  Install: 0.25s.  Post-install: 0.06s.  Total: 3.71s
[+] /usr/share/spack/root/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/zlib-1.2.13-qld66xb7dn5a33fda5qkb4emglsmn6ah
[+] /usr (external zstd-1.4.8-g7zt2gg24fodmzluy7sbmdgfu5klqhjg)
==> Waiting for libtool-2.4.7-sltp3y2ylxczu4arn5pruq4knb6y7j3a
==> Installing libtool-2.4.7-sltp3y2ylxczu4arn5pruq4knb6y7j3a
==> No binary for libtool-2.4.7-sltp3y2ylxczu4arn5pruq4knb6y7j3a found: installing from source
==> Fetching http://ftpmirror.gnu.org/libtool/libtool-2.4.7.tar.gz
==> Error: FetchError: All fetchers failed for spack-stage-libtool-2.4.7-sltp3y2ylxczu4arn5pruq4knb6y7j3a
==> Warning: Skipping build of gmp-6.2.1-f6ku7x6kk77rw6kknylbokmv26rwaaio since libtool-2.4.7-sltp3y2ylxczu4arn5pruq4knb6y7j3a failed
==> Warning: Skipping build of gcc-13.1.0-r6tkqt2qpvdhpvdwb67scwcupu5q7q6n since gmp-6.2.1-f6ku7x6kk77rw6kknylbokmv26rwaaio failed
==> Warning: Skipping build of mpc-1.3.1-lpzenm7ukqv5r2cty3hlxaeeovvz5t2w since gmp-6.2.1-f6ku7x6kk77rw6kknylbokmv26rwaaio failed
==> Warning: Skipping build of mpfr-4.2.0-7d7gia35xoeak45h65jqkedahg3dlios since gmp-6.2.1-f6ku7x6kk77rw6kknylbokmv26rwaaio failed
==> Waiting for autoconf-2.69-4v7zhe6rx5zlrdn3kztxwmzs6dmbed2p
==> Installing autoconf-2.69-4v7zhe6rx5zlrdn3kztxwmzs6dmbed2p
==> No binary for autoconf-2.69-4v7zhe6rx5zlrdn3kztxwmzs6dmbed2p found: installing from source
==> Error: FetchError: All fetchers failed for spack-stage-autoconf-2.69-4v7zhe6rx5zlrdn3kztxwmzs6dmbed2p
==> Warning: Skipping build of automake-1.16.5-ubievawv3tcjygyiofl2jevdqdahatpv since autoconf-2.69-4v7zhe6rx5zlrdn3kztxwmzs6dmbed2p failed
==> Error: gcc-13.1.0-r6tkqt2qpvdhpvdwb67scwcupu5q7q6n: Package was not installed
==> Error: Installation request failed. Refer to reported errors for failing package(s).
That's just the end of the screen log; please let me know if you need the whole log.
Bug ?
Thanks
Chris
On 11/14/23 17:00, Christophe Guilbert wrote:
Thanks Roland. Yes, KVM works much better with virtualization support! ;-) I really thought I had turned it on while inspecting the BIOS prior to the install! Anyway, moving forward to a new installation problem, in Spack:
First, I completely missed the softadm password (I got the one for the demo account). I was giving the same password each time I was asked for one; I guess this one was generated like the demo account's, and I completely missed it. Not a big deal: I tried to change it as root on the FE but was not allowed. (Use QluMan/LDAP for that.) I was able to ssh softadm@jclustert (my FE hostname) to initialize the stack.
OK, very good that it worked out so far.
Here is the error message when trying to initialize the stack using the spack install gcc@13.1.0 target=x86_64 command .....
Just try to re-execute. We've had these errors with spack 0.19.x occasionally as well and it worked afterwards.
Hi Roland, thanks a lot. Nope, did it twice ... no luck.
1 jclustert:~$ spack compiler find
==> Found no new compilers
==> Compilers are defined in the following files: /usr/share/spack/root/opt/.user/config/linux/compilers.yam
0 jclustert:~$ spack install gcc@13.1.0 target=x86_64
[+] /usr (external diffutils-3.8-wztco34tjwhabrmr2ds2a2apor3giyk5)
[+] /usr (external gawk-5.1.0-4eyu3rsi3uy24lm4bp3dicwgdougacnb)
==> Waiting for gmake-4.4.1-d3ghmecmrwxdzizqkb2vfxixc4a7xyu4
[+] /usr/share/spack/root/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/gmake-4.4.1-d3ghmecmrwxdzizqkb2vfxixc4a7xyu4
[+] /usr (external m4-1.4.18-6mn3iho2fr6edfzw7cx4oju3h3nl5blf)
[+] /usr (external perl-5.34.0-hjugzvixfyfpl6yufq5injfzrafmzycu)
==> Waiting for autoconf-archive-2023.02.20-6mk7s3gez7sjc2h4edsgosm2xpetcel6
[+] /usr/share/spack/root/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/autoconf-archive-2023.02.20-6mk7s3gez7sjc2h4edsgosm2xpetcel6
[+] /usr (external texinfo-6.8-r5t34jrn4b3pnquzayc3f5yqnfc6ndet)
==> Waiting for zlib-1.2.13-qld66xb7dn5a33fda5qkb4emglsmn6ah
[+] /usr/share/spack/root/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/zlib-1.2.13-qld66xb7dn5a33fda5qkb4emglsmn6ah
[+] /usr (external zstd-1.4.8-g7zt2gg24fodmzluy7sbmdgfu5klqhjg)
==> Waiting for libtool-2.4.7-sltp3y2ylxczu4arn5pruq4knb6y7j3a
[+] /usr/share/spack/root/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/libtool-2.4.7-sltp3y2ylxczu4arn5pruq4knb6y7j3a
==> Waiting for autoconf-2.69-4v7zhe6rx5zlrdn3kztxwmzs6dmbed2p
[+] /usr/share/spack/root/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/autoconf-2.69-4v7zhe6rx5zlrdn3kztxwmzs6dmbed2p
==> Waiting for automake-1.16.5-ubievawv3tcjygyiofl2jevdqdahatpv
==> Installing automake-1.16.5-ubievawv3tcjygyiofl2jevdqdahatpv
==> No binary for automake-1.16.5-ubievawv3tcjygyiofl2jevdqdahatpv found: installing from source
==> Fetching http://ftpmirror.gnu.org/automake/automake-1.16.5.tar.gz
==> Error: FetchError: All fetchers failed for spack-stage-automake-1.16.5-ubievawv3tcjygyiofl2jevdqdahatpv
==> Warning: Skipping build of gmp-6.2.1-f6ku7x6kk77rw6kknylbokmv26rwaaio since automake-1.16.5-ubievawv3tcjygyiofl2jevdqdahatpv failed
==> Warning: Skipping build of mpc-1.3.1-lpzenm7ukqv5r2cty3hlxaeeovvz5t2w since gmp-6.2.1-f6ku7x6kk77rw6kknylbokmv26rwaaio failed
==> Warning: Skipping build of gcc-13.1.0-r6tkqt2qpvdhpvdwb67scwcupu5q7q6n since mpc-1.3.1-lpzenm7ukqv5r2cty3hlxaeeovvz5t2w failed
==> Warning: Skipping build of mpfr-4.2.0-7d7gia35xoeak45h65jqkedahg3dlios since gmp-6.2.1-f6ku7x6kk77rw6kknylbokmv26rwaaio failed
==> Error: gcc-13.1.0-r6tkqt2qpvdhpvdwb67scwcupu5q7q6n: Package was not installed
==> Error: Installation request failed. Refer to reported errors for failing package(s).
This is most likely due to the package ca-certificates being missing in the chroot. Please try to change into the jammy chroot and install the package manually:
$ chroot-jammy
$ apt install ca-certificates
Then try the spack install again.
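A quick sanity check inside the chroot that the CA bundle is actually in place (the path below is the standard Debian/Ubuntu location; the fetchers need it to verify TLS downloads):

```shell
# Standard Debian/Ubuntu CA bundle path; a non-empty file should exist
# once ca-certificates is installed and update-ca-certificates has run.
if [ -s /etc/ssl/certs/ca-certificates.crt ]; then
    echo "CA bundle present"
else
    echo "CA bundle missing - (re)install ca-certificates"
fi
```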
On 11/14/23 18:47, Christophe Guilbert wrote:
Hi Roland, thanks a lot. Nope, did it twice ... no luck.
It worked, thanks. Moving forward now; I will surely come back to you! 🙂