
Dreaded “Waiting for VM to boot. This can take a few minutes.”

So, I ran into the dreaded “Waiting for VM to boot. This can take a few minutes.” while doing vagrant up.

I thought I had this resolved in my basebox with the postinstall fixups for udev (see DevOps Toolbox).

Well, guess not.

After a bit of hunting around on Google, I found this post:

While a lot of it didn’t really apply to my exact situation, I was able to ssh to the VM on the 2210 port Vagrant set up, and dhclient was already running on the interface; restarting networking didn’t help.

I was at a bit of a loss as to what the issue was.

I thought maybe since eth1 wasn’t up, Vagrant was waiting for it, so I put a static IP on it using the IP I set in the Vagrantfile.

No luck, so I figured maybe it needed to be DHCP so the DHCP server saw the lease or something.
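
For reference, the static attempt was an /etc/sysconfig/network-scripts/ifcfg-eth1 roughly along these lines (the address below is just a placeholder for whatever IP is in your Vagrantfile; the DHCP attempt was the same file with BOOTPROTO=dhcp and no IPADDR/NETMASK):

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.33.10
NETMASK=255.255.255.0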

While copying the existing ifcfg-eth0 to ifcfg-eth1, it hit me.

ifcfg-eth0 has the UUID set from the install.

All I had to do was remove the UUID from the ifcfg-eth0 file and restart networking, and vagrant up completed with no issues; it even configured eth1 correctly.
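
For anyone hitting the same thing, the manual fix was basically this (path assumes a CentOS/RHEL style basebox):

sed -i 's/UUID.*//' /etc/sysconfig/network-scripts/ifcfg-eth0
service network restart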

So, I will modify the postinstall to run a sed against ifcfg-eth0 to remove the UUID line when making a basebox, and re-export the basebox so my VMs vagrant up with no issues.


Discussion

4 thoughts on “Dreaded ‘Waiting for VM to boot. This can take a few minutes.’”

  1. UPDATE:
    Additions to postinstall.sh

    echo "removing UUID from ifcfg-eth0
    sed -i 's/UUID.*//' /etc/sysconfig/network-scripts/ifcfg-eth0 
    

    Posted by koaps | August 7, 2013, 11:07 am
  2. UPDATE 2:
    Still ran into an issue, so I removed the MAC and changed NM_CONTROLLED="no", and my basebox was able to build both cluster VMs with no stalls.
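
    In postinstall.sh terms that's roughly the following (assuming the same ifcfg-eth0 path as before; adjust if the lines are named differently in your file):

    sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
    sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED="no"/' /etc/sysconfig/network-scripts/ifcfg-eth0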

    Still need to test some more, but so far it’s good.

    Posted by koaps | August 7, 2013, 11:07 am
  3. UPDATE 3:

    So I think I finally found the culprit.

    Being on wireless, the VMs won’t start; being wired, they do.

    Might have to try bridged mode with the NICs instead of NAT when on wireless, which is odd because normal VMs in VirtualBox start up and use NAT OK.

    The Vagrant VM does fully start, but the vagrant up never finishes.

    Posted by koaps | August 7, 2013, 11:09 am
  4. UPDATE 4:

    Think I got it beat.

    I was able to consistently reproduce the issue when switching between wired and wireless.

    Tried everything I could find via Google. It seems a lot of people who get this issue have a VM whose network connection is broken and needs to be restarted to get a DHCP lease. That wasn’t the case for me; I was always able to ssh to the VM in another terminal, but vagrant up never completed. I tried bridged interfaces, ripping out the NAT setup in the VM config, and some other stuff.

    I finally read something somewhere about DNS issues, and even though my VM’s resolv.conf had a 4.2.2.4 DNS server, I thought there might be something to this.

    I know from past experience that failing DNS lookups can really slow down SSH connections, and connecting to the local VM was very slow.

    So I put in my standard testing server SSH settings:

    sed -i 's/#TCPKeepAlive yes/TCPKeepAlive yes/' /etc/ssh/sshd_config
    sed -i 's/#ClientAliveInterval 0/ClientAliveInterval 130/' /etc/ssh/sshd_config
    sed -i 's/#UseDNS yes/UseDNS no/' /etc/ssh/sshd_config

    The main one being “UseDNS no”.

    I rebuilt the basebox, and I was able to bring up the cluster VMs with no stalls while on wireless.

    Great Success!!!

    Posted by koaps | August 9, 2013, 8:59 pm
