We've seen gigabytes of Simics memory go to swap because of other
"memory hogs". After a while, when the memory-hog program finishes and
the simulation is alone on the machine again, almost none of this
swapped memory is reclaimed: it stays in swap until the end of the
simulation, so even though it's reachable, it's not used.
I'll tell you (and the list) how it goes when I have my experiment
ready. Regards,
Javi
Dan Gibson wrote:
> Ditto. However, that only means that all memory is reachable and
> reclaimed at the end of execution. It doesn't mean that a large amount
> of memory isn't effectively dead.
>
> Regards,
> Dan
>
> On Tue, Dec 1, 2009 at 11:36 AM, Derek Hower <drh5@xxxxxxxxxxx> wrote:
> Just to add a bit to the discussion, I've run Ruby under
> Valgrind
> before and no leaks are reported.
>
> -Derek
>
>
> On Tue, Dec 1, 2009 at 11:05 AM, Dan Gibson
> <degibson@xxxxxxxx> wrote:
> > Javier,
> > As far as Ruby is concerned, it would be safe to reclaim memory used
> > by a block once it's no longer cached anywhere on-chip. In other
> > words, if that block were ever accessed again, the DirectoryEntry
> > constructor would restore the same state. We've never bothered to
> > implement block recycling because the memory usage of Simics has
> > always been large anyway (though it can be helped with
> > set-memory-limit), and it's difficult to tell when blocks really are
> > no longer cached (it is protocol-specific).
> >
> > Let me, and the list, know how your experiment goes with freeing old
> > blocks. I've poked around a bit myself and haven't found any other
> > likely culprits (which is not proof that they don't exist!) -- but
> > anecdotally, Ruby is stable running for weeks at a time in some of
> > my longer simulations.
> >
> > You might try allocating a large batch of DirectoryEntry objects to
> > figure out how quickly the heap grows as a function of the number of
> > DE allocations. You can then plot the number of true DE allocations
> > in a real Ruby run and see how well it tracks the observed heap
> > growth rate.
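A minimal sketch of the allocation experiment Dan suggests, outside of GEMS. `MockDirectoryEntry` is a hypothetical stand-in whose field sizes come from the estimates quoted later in this thread, not from the real Ruby class, and `heap_growth_per_entry` is an illustrative helper, not a GEMS function:

```cpp
#include <sys/resource.h>
#include <cstddef>
#include <vector>

// Mock of the DirectoryEntry described in this thread; the field sizes
// are the thread's estimates, not the real Ruby class definition.
struct MockDirectoryEntry {
    int  state;          // State: an enum, 4 bytes
    char datablock[16];  // empty-Vector stand-in, ~16 bytes
    int  counter;        // int, 4 bytes
    char sets[2][24];    // two Sets, ~24 bytes each
};

// Peak resident set size so far, in kilobytes (Linux semantics of
// ru_maxrss).
static long rss_kb() {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_maxrss;
}

// Allocate n mock entries and return the observed growth in bytes per
// entry; comparing this against sizeof(MockDirectoryEntry) estimates
// the allocator's per-entry overhead.
double heap_growth_per_entry(std::size_t n) {
    long before = rss_kb();
    std::vector<MockDirectoryEntry*> entries;
    entries.reserve(n);
    for (std::size_t i = 0; i < n; ++i)
        entries.push_back(new MockDirectoryEntry());
    double perEntry = (rss_kb() - before) * 1024.0 / n;
    for (MockDirectoryEntry* e : entries) delete e;
    return perEntry;
}
```

Plotting the value returned for increasing `n` against the true DE allocation count of a real Ruby run is the correlation Dan describes.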
> >
> > As for the history of InvalidateBlock, I'm unsure. Most of my work
> > has been in the lowest levels of Ruby, near the Simics API. Perhaps
> > the rest of the list can shed more light on that.
> >
> > Good luck.
> >
> > Regards,
> > Dan
> >
> > 2009/12/1 Javi Merino <jmerino@xxxxxxxxxxxxx>
> >>
> >> Dan Gibson wrote:
> >> > Javier,
> >> > Having made that same graph on a couple of occasions myself, I
> >> > can say with some confidence that this is correct behavior, and
> >> > not a leak.
> >> >
> >> > The phenomenon you're observing is an artifact of how Ruby
> >> > implements 'arbitrary' coherence. Whenever a memory block is
> >> > accessed for the first time, a directory entry is allocated for
> >> > that line and used to track some state relating to the block (see
> >> > your protocol's Directory_Entry). This happens in
> >> > DirectoryMemory::lookup. These entries are never reclaimed.
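The allocate-on-first-touch pattern described above can be sketched as follows. The names and types here are illustrative assumptions, not the actual `DirectoryMemory` code:

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical stand-in for a protocol's Directory_Entry.
struct Entry {
    int state = 0;  // default coherence state for an untouched line
};

class Directory {
    std::unordered_map<std::uint64_t, Entry*> entries_;
public:
    // The first access to a line allocates its entry; after that, the
    // entry lives for the rest of the simulation -- nothing frees it.
    Entry& lookup(std::uint64_t lineAddr) {
        auto it = entries_.find(lineAddr);
        if (it == entries_.end())
            it = entries_.emplace(lineAddr, new Entry()).first;
        return *it->second;
    }
    std::size_t size() const { return entries_.size(); }
};
```

With this structure the footprint grows monotonically with the number of unique lines ever touched, which is exactly the curve described in the thread.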
> >>
> >> There's a DirectoryMemory::invalidateBlock() that's been commented
> >> out since (at least) GEMS 1.4. If the "memory leak" is just
> >> DirectoryEntries that are never freed, then deleting the entry when
> >> the coherence protocol no longer needs it will keep memory from
> >> growing too much, right?
> >>
> >> >
> >> > Therefore, Ruby's memory allocation can continue to grow until
> >> > the target system has accessed every block in its available
> >> > memory.
> >> >
> >> > I see in your post that you are using an application with a fixed
> >> > working set. After it touches all of it, I would expect memory
> >> > usage to mostly plateau, except for other system activity -- a
> >> > behavior I have seen myself; it also seems to happen at about
> >> > X=800M cycles in your graph. Bear in mind that even a
> >> > 'controlled' application isn't the only thing running on a
> >> > full-system simulator, and Ruby's memory will still grow,
> >> > inexorably, due to system activity and interference from other
> >> > processes.
> >>
> >> I am aware that I'm running a full-system simulator, but that seems
> >> like too much memory just for system activity: after the 800M-cycle
> >> mark, memory grows by more than 100 MB. The DirectoryEntry for this
> >> protocol is:
> >> * State: an enum, 4 bytes.
> >> * DataBlock: DATA_BLOCK is false, so this is the size of an empty
> >> Vector, around 16 bytes.
> >> * int: 4 bytes.
> >> * 2 Sets: around 24 bytes each.
> >>
> >> This adds up to around 72 bytes per DirectoryEntry, and each of
> >> them simulates 64 bytes of memory. I know Simics is also allocating
> >> memory to simulate this, but still, system activity shouldn't be
> >> noticeable in terms of total memory used.
> >>
> >> I think I'm going to try calling DirectoryMemory::invalidateBlock()
> >> whenever the coherence protocol doesn't need the entry any more. In
> >> Token coherence it's easy: when memory holds all the tokens, Ruby
> >> can delete the DirectoryEntry.
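A sketch of the reclamation Javier proposes, under Token-coherence assumptions. The real hook would be the commented-out `DirectoryMemory::invalidateBlock()`; the types, names, and token-counting field here are hypothetical illustrations, not the actual GEMS code:

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical directory entry that tracks how many of the line's
// tokens are currently held by memory.
struct TokenEntry {
    int tokensAtMemory = 0;
};

class TokenDirectory {
    std::unordered_map<std::uint64_t, TokenEntry*> entries_;
    int totalTokens_;  // total tokens per line in the protocol
public:
    explicit TokenDirectory(int totalTokens) : totalTokens_(totalTokens) {}

    // Allocate-on-first-touch, as DirectoryMemory::lookup does today.
    TokenEntry& lookup(std::uint64_t addr) {
        auto it = entries_.find(addr);
        if (it == entries_.end())
            it = entries_.emplace(addr, new TokenEntry()).first;
        return *it->second;
    }

    // Once memory holds every token for a line, no cache can have the
    // block, so its entry can be deleted safely; a later access simply
    // re-allocates it in the default state.
    void invalidateBlock(std::uint64_t addr) {
        auto it = entries_.find(addr);
        if (it != entries_.end() &&
            it->second->tokensAtMemory == totalTokens_) {
            delete it->second;
            entries_.erase(it);
        }
    }

    std::size_t liveEntries() const { return entries_.size(); }
};
```

The safety argument matches Dan's earlier point: because the constructor restores the default state, deleting an entry that no cache holds is indistinguishable from never having allocated it.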
> >>
> >> Was invalidateBlock() ever used? Do you have any idea why it isn't
> >> used right now? I know it's not necessary for a correct simulation,
> >> but I'm curious why the code is there but commented out.
> >>
> >> > However, just to cover all our bases, try running Ruby's tester.
> >> > The tester limits the number of actual addresses used to a
> >> > handful, so the memory footprint should be more stable. Even
> >> > then, I wouldn't be surprised if you still saw some inflation due
> >> > to stats gathering (e.g., some maps in Profiler.C).
> >>
> >> I haven't tried it. I'll run it as well and see what we get.
> >>
> >> Thank you,
> >> Javier Merino
> >>
> >> > Regards,
> >> > Dan
> >> >
> >> > 2009/11/30 Javi Merino <jmerino@xxxxxxxxxxxxx>
> >> > Hi, when we run long simulations using GEMS+Simics, we get a very
> >> > big memory footprint. The attached file shows the total memory
> >> > used by GEMS+Simics during the simulation of one iteration of
> >> > IS.B. We used GEMS 2.1 with MOESI_CMP_token and the default
> >> > configuration. The only parameter we modified is the number of
> >> > processors (8). It is ruby+opal, but when we simulate without
> >> > ruby, the memory used is more or less constant.
> >> >
> >> > If we simulate two or three iterations of this application, the
> >> > memory used keeps increasing, even though the working set of the
> >> > simulated application is the same across iterations.
> >> >
> >> > The memory controller also uses a lot of memory, but it allocates
> >> > it at the beginning of the simulation.
> >> >
> >> > Do you have any idea where the memory leak is? It would be great
> >> > if we could keep the memory used under control, and all GEMS
> >> > users could benefit from that. Regards,
> >> > Javier Merino
> >> >
> >> > _______________________________________________
> >> > Gems-users mailing list
> >> > Gems-users@xxxxxxxxxxx
> >> > https://lists.cs.wisc.edu/mailman/listinfo/gems-users
> >> > Use Google to search the GEMS Users mailing list by adding
> >> > "site:https://lists.cs.wisc.edu/archive/gems-users/" to your
> >> > search.
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > --
> >> > http://www.cs.wisc.edu/~gibson [esc]:wq!
> >> >
> >>
> >>
> >>
> >
> >
> >
> >
> >
> >
>
>
>
>
>
>
Attachment:
signature.asc
Description: This part of the message is digitally signed