Seán C. Farley | 1 Mar 01:24 2006

Re: VMWARE GSX Port?

On Tue, 28 Feb 2006, Mike Silbersack wrote:

> On Sat, 25 Feb 2006, Scott Long wrote:
>
>> Ashok Shrestha wrote:
>>> VMWARE GSX was released recently for free.
>>> [http://www.vmware.com/news/releases/server_beta.html]
>>> 
>>> Is anyone working on a port for this?
>> 
>> I've started on it, but I haven't made much progress yet.
>
> Anyone who's interested in working on it should make sure to start
> with the VMWare 3 port (which works at present), and Orlando's beta
> 4.5 port:
>
> http://www.break.net/orlando/freebsd.html

Also, check out Wietse Venema's changes[1] done on top of Orlando's
work.

Seán
   1. http://lists.freebsd.org/pipermail/freebsd-emulation/2006-February/001843.html
-- 
sean-freebsd <at> farley.org
_______________________________________________
freebsd-hackers <at> freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "freebsd-hackers-unsubscribe <at> freebsd.org"

Alex Semenyaka | 1 Mar 04:02 2006

Re: world's toolchain & CPUTYPE

On Tue, Feb 28, 2006 at 10:19:11AM +0200, Ruslan Ermilov wrote:
> > Isn't it reasonable to add corresponding optional functionality
> > into the build process?
> No.

Why? :)

> > For example, if -DSTATIC_TOOLCHAIN (or
> > pick any other name) is set, then:
> > 1) build toolchain statically linked
> This is already the case (${XMAKE} has -DNO_SHARED).

Oh, great. Could we also add -DNO_MAKE_CONF then?
Or at least -DTOOLCHAIN_NO_MAKE_CONF :)
That would be enough. Or am I missing something?

							SY, Alex


Rohit Jalan | 1 Mar 07:17 2006

Re: UMA zone allocator memory fragmentation questions

Hi Robert,

My problem is that I need to enforce a single memory limit
on the total number of pages used by multiple zones.

The limit changes dynamically based on the number of pages 
being used by other non-zone allocations and also on the amount 
of available swap and memory.

I've tried to do the same in various ways with the stock kernel but
I was unsuccessful due to reasons detailed below. In the end I had to
patch the UMA subsystem to achieve my goal. 

Is there a better method of doing this, one that would not involve
patching the kernel? Please advise.

----------------------------------------------------------------------
TMPFS uses multiple UMA zones to store filesystem metadata.
These zones are allocated on a per-mount basis for reasons described in 
the documentation. Because of fragmentation that can occur in a zone due 
to dynamic allocations and frees, the actual memory in use can be more
than the sum of the contained item sizes. This makes it difficult to 
track and limit the space being used by a filesystem.

Even though the zone API provides scope for custom item constructors 
and destructors, the necessary information (the number of pages used) 
is stored inside a keg structure, which itself is part of the opaque 
uma_zone_t object.  One could include <vm/uma_int.h> and access
the keg information in a custom constructor, but it would require
messy code to calculate the change delta, because one would have to 

Ruslan Ermilov | 1 Mar 09:02 2006

Re: world's toolchain & CPUTYPE

On Wed, Mar 01, 2006 at 06:02:26AM +0300, Alex Semenyaka wrote:
> On Tue, Feb 28, 2006 at 10:19:11AM +0200, Ruslan Ermilov wrote:
> > > Isn't it reasonable to add corresponding optional functionality
> > > into the build process?
> > No.
> 
> Why? :)
> 
I think I've explained this in the part that isn't quoted here.

> > > For example, if -DSTATIC_TOOLCHAIN (or
> > > pick any other name) is set, then:
> > > 1) build toolchain statically linked
> > This is already the case (${XMAKE} has -DNO_SHARED).
> 
> Oh, great. Could we also add -DNO_MAKE_CONF then?
> Or at least -DTOOLCHAIN_NO_MAKE_CONF :)
> That would be enough. Or am I missing something?
> 
I fail to see what problem you are trying to attack.
-DNO_CPU_CFLAGS is already there, if that's what you mean:

BMAKE=          MAKEOBJDIRPREFIX=${WORLDTMP} \
                ${BMAKEENV} ${MAKE} -f Makefile.inc1 \
                DESTDIR= \
                BOOTSTRAPPING=${OSRELDATE} \
                -DNO_HTML -DNO_INFO -DNO_LINT -DNO_MAN -DNO_NLS -DNO_PIC \
                -DNO_PROFILE -DNO_SHARED -DNO_CPU_CFLAGS -DNO_WARNS
                                         ^^^^^^^^^^^^^^^
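(For illustration only, not part of the original message: with a CPUTYPE setting such as the following in /etc/make.conf, bsd.cpu.mk would normally add a CPU-specific -march flag to CFLAGS; the -DNO_CPU_CFLAGS above is what keeps that from leaking into the bootstrap toolchain. The CPU value is an arbitrary example.)

```
# /etc/make.conf (illustrative)
CPUTYPE?=pentium4   # bsd.cpu.mk maps this to e.g. -march=pentium4 for
                    # the world build, but not for the bootstrap stages,
                    # which pass -DNO_CPU_CFLAGS as shown above
```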


Ruslan Ermilov | 1 Mar 09:10 2006

Re: world's toolchain & CPUTYPE

On Mon, Feb 27, 2006 at 01:15:02PM +0300, Yar Tikhiy wrote:
> > What's really fun is tricking the build system so you can cross build
> > on one system, but native install on another from the same tree...
> 
> I wondered, too, if it would be possible to cross-build install
> tools so that they could run on the target system, but I haven't
> investigated this way yet.  Do you have any ideas/recipes?  Thanks!
> 
Well, the tools you want were already built, for the target host.
But you might not be able to install and run them (they may require
a new syscall, some new shared libraries, etc.).  The tools that
you intend to run on host "I" during the install should either be
compiled on this host (using its libraries, preferably statically
linked), or on a compatible host in a compatible build environment.
So it all depends on how similar the hosts "B" and "I" and their
build environments are.

Cheers,
-- 
Ruslan Ermilov
ru <at> FreeBSD.org
FreeBSD committer
Josef Karthauser | 1 Mar 13:25 2006

KGDB not reading my crash dump.

Hi guys,

I've got a crash dump that I'm trying to examine, but kgdb isn't
recognising it:

    genius# kgdb /usr/obj/usr/src/sys/GENIUS2/kernel.debug ./vmcore.12
    kgdb: cannot read PTD
    genius# file vmcore.12
    vmcore.12: ELF 32-bit LSB core file Intel 80386, invalid version (embedded)

Is this a known problem?  Any idea what's up?

Joe

ps. FreeBSD genius.tao.org.uk 6.1-PRERELEASE FreeBSD 6.1-PRERELEASE #26: Fri Feb 17 12:26:21 GMT 2006    
joe <at> genius.tao.org.uk:/usr/obj/usr/src/sys/GENIUS2  i386
Andrey Simonenko | 1 Mar 15:06 2006

Re: Accessing address space of a process through kld!!

On Tue, Feb 28, 2006 at 01:33:47PM -0500, John Baldwin wrote:
> On Monday 27 February 2006 13:31, John-Mark Gurney wrote:
> > Tanmay wrote this message on Mon, Feb 27, 2006 at 13:56 +0530:
> > > How do I access the address space (i.e., text, data, and stack) of a
> > > (user-level) process whose pid I know, from my kld?  For example, suppose
> > > 'vi' is running and I want to access its address space through my kld;
> > > how do I do it?
> > 
> > You look up the process with pfind(9), and then you can use uio(9) to
> > transfer data into kernel space...  Don't forget to PROC_UNLOCK the
> > struct once you are done referencing it.
> 
> You can use the proc_rwmem() function (it takes a uio and a struct proc)
> to do the actual I/O portion.  You can see example use in the ptrace()
> syscall.
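As a reference, here is a minimal sketch (not from the thread) of the approach described above: look the process up with pfind(9), then hand proc_rwmem() a uio describing a kernel buffer. The PHOLD/PROC_UNLOCK pattern follows the ptrace() syscall's own usage; treat the details as assumptions to be checked against sys/kern/sys_process.c.

```c
/*
 * Sketch only: read `len' bytes at virtual address `va' in process
 * `pid' into the kernel buffer `buf', via pfind(9) and proc_rwmem().
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/uio.h>

static int
read_proc_mem(pid_t pid, vm_offset_t va, void *buf, size_t len)
{
	struct proc *p;
	struct uio uio;
	struct iovec iov;
	int error;

	if ((p = pfind(pid)) == NULL)	/* returned locked on success */
		return (ESRCH);
	_PHOLD(p);			/* keep the process around ... */
	PROC_UNLOCK(p);			/* ... but proc_rwmem() may sleep */

	iov.iov_base = buf;
	iov.iov_len = len;
	uio.uio_iov = &iov;
	uio.uio_iovcnt = 1;
	uio.uio_offset = va;		/* address in the target process */
	uio.uio_resid = len;
	uio.uio_segflg = UIO_SYSSPACE;	/* buf is a kernel address */
	uio.uio_rw = UIO_READ;
	uio.uio_td = curthread;

	error = proc_rwmem(p, &uio);
	PRELE(p);			/* drop the hold taken above */
	return (error);
}
```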

I have two questions about this function:

1.	vm_fault() does not guarantee that the (possibly) faulted-in page
	will still be in the object, or in one of its backing objects, when
	vm_fault() returns, because the page can become non-resident
	again.  Why not wire the needed page in vm_fault() (by passing
	a special flag to the vm_fault() function)?

2.	When the object which owns the page is unlocked, which lock
	guarantees that m will still point to a valid page?  I mean the m
	that is used in vm_page_hold(m), which is called after
	VM_OBJECT_UNLOCK() (i.e., in the window between the
	VM_OBJECT_UNLOCK() and vm_page_lock_queues() calls).

Can you answer these two questions?  Thanks.

Robert Watson | 1 Mar 17:57 2006

Re: UMA zone allocator memory fragmentation questions


On Wed, 1 Mar 2006, Rohit Jalan wrote:

> My problem is that I need to enforce a single memory limit on the total 
> number of pages used by multiple zones.
>
> The limit changes dynamically based on the number of pages being used by 
> other non-zone allocations and also on the amount of available swap and 
> memory.
>
> I've tried to do the same in various ways with the stock kernel but I was 
> unsuccessful due to reasons detailed below. In the end I had to patch the 
> UMA subsystem to achieve my goal.
>
> Is there a better method of doing this, one that would not involve patching 
> the kernel? Please advise.

Currently, UMA supports limits on allocation by keg, so if two zones don't 
share the same keg, they won't share the same limit.  Supporting limits shared 
across kegs requires a change as things stand.

On the general topic of how to implement this -- I'm not sure what the best 
approach is.  Your approach gives quite a bit of flexibility.  I wonder, 
though, if it would be better to add an explicit accounting feature rather 
than a more flexible callback feature?  I.e., have a notion of a UMA 
accounting group which can be shared by one or more kegs to impose shared 
limits on multiple kegs?
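To make the idea concrete, a hypothetical sketch (invented names, not an existing UMA API): an accounting group is just a counter and limit under its own lock, which each member keg charges before growing and uncharges when slab pages are freed.

```c
/*
 * Hypothetical sketch of a shared UMA "accounting group" -- not an
 * existing API.  Several kegs would point at one of these to get a
 * limit shared across all of them.
 */
struct uma_acctgroup {
	struct mtx	ag_lock;	/* protects the counters below */
	u_int		ag_pages;	/* pages currently charged */
	u_int		ag_maxpages;	/* shared limit across member kegs */
};

/*
 * Try to charge npages to the group; returns 0 on success, or ENOMEM
 * if the shared limit would be exceeded.
 */
static int
uma_acct_charge(struct uma_acctgroup *ag, u_int npages)
{
	int error = 0;

	mtx_lock(&ag->ag_lock);
	if (ag->ag_pages + npages > ag->ag_maxpages)
		error = ENOMEM;
	else
		ag->ag_pages += npages;
	mtx_unlock(&ag->ag_lock);
	return (error);
}

/* Matching uncharge, called when slab pages go back to the system. */
static void
uma_acct_uncharge(struct uma_acctgroup *ag, u_int npages)
{
	mtx_lock(&ag->ag_lock);
	ag->ag_pages -= npages;
	mtx_unlock(&ag->ag_lock);
}
```

The group's own mutex is exactly the extra locking cost that the follow-up in this thread worries about.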

Something similar to this might also be useful in the mbuf allocator, where we 
currently have quite a few kegs and zones floating around, making implementing 

John Baldwin | 1 Mar 16:54 2006

Re: Accessing address space of a process through kld!!

On Wednesday 01 March 2006 09:06, Andrey Simonenko wrote:
> On Tue, Feb 28, 2006 at 01:33:47PM -0500, John Baldwin wrote:
> > On Monday 27 February 2006 13:31, John-Mark Gurney wrote:
> > > Tanmay wrote this message on Mon, Feb 27, 2006 at 13:56 +0530:
> > > > How do I access the address space (i.e., text, data, and stack) of a
> > > > (user-level) process whose pid I know, from my kld?  For example, suppose
> > > > 'vi' is running and I want to access its address space through my kld;
> > > > how do I do it?
> > > 
> > > You look up the process with pfind(9), and then you can use uio(9) to
> > > transfer data into kernel space...  Don't forget to PROC_UNLOCK the
> > > struct once you are done referencing it.
> > 
> > You can use the proc_rwmem() function (it takes a uio and a struct proc)
> > to do the actual I/O portion.  You can see example use in the ptrace()
> > syscall.
> 
> I have two questions about this function:
> 
> 1.	vm_fault() does not guarantee that the (possibly) faulted-in page
> 	will still be in the object, or in one of its backing objects, when
> 	vm_fault() returns, because the page can become non-resident
> 	again.  Why not wire the needed page in vm_fault() (by passing
> 	a special flag to the vm_fault() function)?
> 
> 2.	When the object which owns the page is unlocked, which lock
> 	guarantees that m will still point to a valid page?  I mean the m
> 	that is used in vm_page_hold(m), which is called after
> 	VM_OBJECT_UNLOCK() (i.e., in the window between the
> 	VM_OBJECT_UNLOCK() and vm_page_lock_queues() calls).
> 

Rohit Jalan | 1 Mar 19:20 2006

Re: UMA zone allocator memory fragmentation questions

> 
> On the general topic of how to implement this -- I'm not sure what the best 
> approach is.  Your approach gives quite a bit of flexibility.  I wonder, 
> though, if it would be better to add an explicit accounting feature rather 
> than a more flexible callback feature?  I.e., have a notion of a UMA 
> accounting group which can be shared by one or more kegs to impose shared 
> limits on multiple kegs?
> 

The addition of an accounting feature would be nice but 
would that not increase lock contention? 

Specifically, UMA will need to do locking if it manages shared
information, but some API users may not want to pay this penalty
if their shared-limit zones are accessed in an orderly manner, or
if the user for some reason locks the parent data structures that
contain all the shared-limit zones before doing zone operations.

And even then, without a callback, it would not be feasible 
to implement a dynamic limit.

E.g.,

TMPFS_PAGES_MAX() is a macro that uses vmmeter and user-specified 
parameters to provide a dynamic limit.

static void*
tmpfs_zone_page_alloc(uma_zone_t zone, int size, uint8_t *pflag, int wait,
    void *arg)
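The body of the allocator is cut off by the archive above; what follows is a hedged sketch of what such a limit-enforcing UMA page allocator could look like under the patched five-argument allocf API shown, with tm_pages_used as an assumed per-mount page counter (all names beyond those in the message are illustrative).

```c
/*
 * Hypothetical sketch only -- the real function body is truncated
 * above.  Refuse to grow the zone once the dynamic limit would be
 * exceeded; otherwise fall through to the usual kernel page
 * allocation, as in UMA's stock page_alloc().
 */
static void *
tmpfs_zone_page_alloc(uma_zone_t zone, int size, uint8_t *pflag, int wait,
    void *arg)
{
	struct tmpfs_mount *tmp = arg;	/* per-mount data via the new arg */
	u_int npages = howmany(size, PAGE_SIZE);
	void *va;

	if (tmp->tm_pages_used + npages > TMPFS_PAGES_MAX(tmp))
		return (NULL);		/* over the dynamic limit: fail */

	va = (void *)kmem_malloc(kmem_map, size, wait);
	if (va != NULL) {
		*pflag = UMA_SLAB_KMEM;	/* slab came from kmem_map */
		tmp->tm_pages_used += npages;
	}
	return (va);
}
```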

