Oliver Rath | 31 Jul 20:34 2014

dhcp 44 (netbios) useable as setting?

Hi list,

Is it possible to use DHCP option 44 (the IP address of the NetBIOS name
server) in an iPXE setting? I didn't find anything about this in the
documentation. Maybe there is a generic way to access DHCP options?

I.e. something like

echo ${dhcp/44}
or
echo ${net0.dhcp/44:ipv4}
?
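
A fuller sketch of what I have in mind (untested; the numeric option
syntax and the :ipv4 type conversion are just my guesses):

   #!ipxe
   dhcp net0
   # print DHCP option 44 (NetBIOS name server), interpreted as IPv4
   echo ${net0.dhcp/44:ipv4}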

Regards,
Oliver

Michael Brown | 30 Jul 00:20 2014

Xen netfront support

I'm pleased to announce that iPXE now natively supports Xen netfront 
virtual NICs.  This may be of interest to anyone wanting to use iPXE 
running in a Xen PV-HVM domain.

I have one issue which I'm not sure how to handle, and would appreciate 
feedback from anyone who frequently uses Xen:

For each configured NIC, Xen exposes both an emulated PCI NIC (for OSes 
with no native Xen drivers) and a netfront virtual NIC (for OSes with 
native Xen drivers).  Xen provides a mechanism for OSes to "unplug" the 
emulated PCI NICs.  Operating systems with native Xen drivers will 
typically unplug the emulated PCI NICs to prevent confusion.

iPXE could easily unplug the emulated PCI NICs.  However, this operation 
is irreversible.  If the loaded OS does not include native Xen drivers, 
then it will not be able to see any network devices.  This is undesirable.

iPXE could easily expose both the emulated PCI NICs and the netfront 
virtual NICs (as e.g. net0 and net1).  This works perfectly (since the 
backends are entirely independent; the "hardware" really _is_ providing 
two NICs which happen to have the same MAC address and be connected to 
the same network), but it's somewhat confusing for a user to see:

   iPXE> ifstat
   net0: 56:3c:dc:73:51:7e using netfront on vif/0 (open)
     [Link:up, TX:0 TXE:0 RX:0 RXE:0]
   net1: 56:3c:dc:73:51:7e using rtl8139 on PCI00:04.0 (open)
     [Link:up, TX:0 TXE:0 RX:0 RXE:0]
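
As a workaround, a user who knows which interface is which could of 
course just close the duplicate by hand, e.g. (a sketch only):

   iPXE> ifclose net1
   iPXE> dhcp net0

but that still requires the user to understand why two apparently 
identical NICs appear in the first place.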

iPXE cannot easily mask out just the emulated NICs, since we have no 
(Continue reading)

sujan karanjeet | 28 Jul 10:38 2014

Issues with some Network Cards

Hello Everyone,

I've been having issues booting with some network controllers, such as the Marvell Yukon 88E8055 PCI-E Gigabit Ethernet Controller (10/100/1000 Mbit). The system obtains a DHCP IP address and gets into iPXE, but it gets stuck while trying to get the link up (net0). Could someone please help me with this?


This is what I see when booting:

iPXE initialising devices...ok

iPXE 1.0.0+ (e0478) -- Open Source Network Boot Firmware -- http://ipxe.org
Features: HTTP iSCSI DNS TFTP AoE SRP bzImage ELF MBOOT PXE PXEXT Menu

net0: 00:17:42:c8:0c:8c:8e using m88e8055 on PCI00:00.0 (open)
[Link: down, TX:0 TXE:0 RX:0 RXE:0]
[Link status: Down (http://ipxe.org/38086101)]
Waiting for link-up on net0.............. --- Gets stuck here ---


Thanks in advance,
Sujan 
 
Mike Harris | 22 Jul 15:44 2014

iPXE, ESXi 5.5 Stateless + Caching Install - BMP Razor + Chef Integration, Routed iSCSI, IaaS block

Greetings!

I'm currently using local storage (2-way mirror, LSI controller) to boot a "test rabbit" SuperMicro blade in my lab.

High level:

+ X8DTT-H Motherboard
+ Intel X540 NIC (dual 10G copper)

We commonly use these blades and would like to bare-metal provision a failed blade from a last known good state, with the final profile applied by an ESM tool like Chef.

Currently the blade uses DAS.  It works, but an iSRB is better and more convenient (from a hardware point of view).

I would like to:

+ iPXE boot,
+ attach a SAN volume as the boot device (1),
+ have Razor factor, tag, and kickstart the ESXi 5.5 install process,
+ then broker the node to Chef for final provisioning.

I've been able to get most of this working aside from the SAN volume (1).  A routed iSCSI/NFS SAN volume is a challenge, since the default iPXE binary doesn't support vcreate.  I haven't found any examples of routed iSCSI (or NFS); I'm sure someone has done it, and hopefully they're on this mailing list.

If anyone has tips on routed iSCSI/NFS boot volumes, or Razor/Chef integration experience, I'd be most appreciative of feedback on how you managed iSRB.  I have a pretty network diagram of the POC which I'm happy to share if you're interested.
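
For clarity, the boot flow I'm attempting looks roughly like this (a
sketch only; the VLAN tag, target address and IQN are placeholders, and
vcreate needs a build with VLAN_CMD enabled in config/general.h):

   #!ipxe
   dhcp net0
   vcreate --tag 100 net0
   dhcp net0-100
   sanhook iscsi:10.1.1.10::::iqn.2014-07.com.example:esxi-boot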

Although the reward is strictly karma at this point, I may have a bunch of Chef work that needs doing that could lead to some meaningful PS for a couple of ninjas in a cool location or two.

May the force be with you!

Mike



Charan R | 21 Jul 12:16 2014

iPXE slow boot

Hi 

I am using the iPXE ROM image (8086100f.mrom) on a VM on one of our ESXi hosts, and I am able to sanboot from the VM so that it boots from the target LUN (ESXi image) without any error.

However, booting the ESXi image from the target LUN is taking more than 4-5 hours.

Could you please help me with this issue?

Regards,
Charan
Sven Ulland | 21 Jul 15:41 2014

[PATCH] [lacp] Set 'aggregatable' flag in response LACPDU

Some switches do not allow an individual link (as defined in IEEE Std
802.3ad-2000 section 43.3.5) to work alone in a link aggregation group
as described in section 43.3.6. This is verified on Dell's PowerConnect
M6220, based on the Broadcom Strata XGS-IV chipset.

Set the LACP_STATE_AGGREGATABLE flag in the actor.state field to
announce link aggregation in the response LACPDU, which will have the
switch enable the link aggregation group and allow frames to pass.
---
 src/net/eth_slow.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/net/eth_slow.c b/src/net/eth_slow.c
index 69c38f3..db54b55 100644
--- a/src/net/eth_slow.c
+++ b/src/net/eth_slow.c
@@ -167,7 +167,8 @@ static int eth_slow_lacp_rx ( struct io_buffer *iobuf,
 	lacp->actor.key = htons ( 1 );
 	lacp->actor.port_priority = htons ( LACP_PORT_PRIORITY_MAX );
 	lacp->actor.port = htons ( 1 );
-	lacp->actor.state = ( LACP_STATE_IN_SYNC |
+	lacp->actor.state = ( LACP_STATE_AGGREGATABLE |
+			      LACP_STATE_IN_SYNC |
 			      LACP_STATE_COLLECTING |
 			      LACP_STATE_DISTRIBUTING |
 			      ( lacp->partner.state & LACP_STATE_FAST ) );
--

-- 
2.0.1

NICOLAS CATTIE | 21 Jul 11:32 2014

TR: undi api call failed

Hi,

 

Could this error message be caused by a missing mass-storage driver?

If I use the same WinPE x64 with ISO boot, it works.

 

Thank you

 

Nicolas

 

From: NICOLAS CATTIE - U115440
Sent: Tuesday, 8 July 2014 19:47
To: 'ipxe-devel-Ajx3hB6KsW1nerjlECmc1w@public.gmane.org'
Subject: undi api call failed

 

Hi list,

 

I have a new problem on HP ws460 Gen8 blade workstations: when WinPE should start (after chaining with wimboot), I only see some colored pixels at the top of the screen.  I'm using a WinPE x64 (for driver-support reasons).  I tried with a 32-bit WinPE and it starts correctly!  Both are based on the WinPE ADK for Windows 8.

 

I tried a WinPE x64 for Windows 8.1 and I see the following error messages:

 

UNDI API call 0012 failed : status code 006A

Unable to determine UNDI physical device

UNDI API call 0013 failed : status code 006A

UNDI API call 0071 failed : status code 006A

UNDI API call 0005 failed : status code 006A

 

 

Any ideas or explanations?

 

Thanks

 

Nicolas

Attachment (smime.p7s): application/pkcs7-signature, 9 KiB
Hannes Reinecke | 16 Jul 10:20 2014

Multiple iSCSI session support

Hi all,

I'm trying to implement multiple session support for iSCSI, so 
that I can have an iBFT with two sessions filled in.
Nice try; sadly, the ACPI table gets zeroed out on every call to 
'sanhook' / int13_describe().

Removing the call to 'memset' retains the old ACPI tables and 
everything works as designed.

Can't we have it removed in general?
I would have thought that any function re-writing the ACPI tables 
would fill in _all_ the bits, so the (per-interface) functionality 
should be unaffected.

The difference is that we can now have several calls to sanhook, 
and each of them will register itself correctly.
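
To make the use case concrete, the end goal is something like this
(the targets here are placeholders):

   #!ipxe
   sanhook --drive 0x80 iscsi:192.168.0.1::::iqn.2014-07.org.example:lun0
   sanhook --drive 0x81 iscsi:192.168.0.2::::iqn.2014-07.org.example:lun1
   sanboot --drive 0x80

with both sessions described in the resulting iBFT.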

Thoughts?

Cheers,

Hannes
--

-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare@...			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)
Curtis Larsen | 16 Jul 05:39 2014

EFI Text Mode (Apple)

Attached is the output of git diff with the changes necessary to make
sure that the EFI console switches to text mode.

Thanks for the help.

Curtis

Attachment (efi-text-mode-patch.diff): text/x-patch, 5586 bytes

Oliver Rath | 11 Jul 20:26 2014

addendum [nfs-boot over vlan doesn't work]

Hi list,

I forgot to say that the 8021q module is compiled into the kernel, so
VLAN should be possible.

Regards
Oliver
Oliver Rath | 11 Jul 20:23 2014

nfs-boot over vlan doesn't work

Hi list,

I am trying to boot Linux via NFS. At the moment I have two subnets:

192.168.97.0/24 (standard)
182.168.199.0/24 (vlan2)

I boot ipxe with this embedded script:

#!ipxe
#set cached 1
#:dhcp
#dhcp || shell
vcreate --tag 2 net0
# getting 2nd ip over vlan works fine
ifconf net0-2 || shell
kernel --name vmlinuz nfs://${net0-2/gateway}/home/oliver/gentoo-phone/boot/vmlinuz-3.15.3-aufsnonPAEsinglecore || shell
imgargs vmlinuz ip=none root=/dev/nfs nfsroot=${net0-2/gateway}:/home/oliver/gentoo-phone/ || shell

# loading kernel and imgargs works
shell

The kernel is loaded, but tcpdump shows that the download comes
over eth0, not eth0.2; furthermore, the Linux kernel hangs while mounting
the NFS root over the given address. The gateway of net0-2 is set correctly.

Booting over normal lan works well.

What's wrong here?

Tfh!
Oliver

