arvid | 2 Aug 05:25 2010

Re: Dynamically unload and reload torrent data

Quoting imin imup <iminimup <at> gmail.com>:

> Hello Arvid,
> Destructing/re-constructing piece manager is implemented and under testing.

Cool, that's awesome. Which branch are you working on again? (0.14, 0.15 or
trunk?).

> Since the torrent files in my application can be megabytes large, I'm
> looking at how to unload/reload the info section. Do these two variables hold
> 2 copies of the info section? I.e., m_info_section holds a raw string version
> and m_info_dict holds a parsed version.
>         boost::shared_array<char> m_info_section;
>         mutable lazy_entry m_info_dict;

No, just one copy. All the memory is kept in m_info_section, and then a parsed
tree-structure with pointers into the m_info_section memory is held by
m_info_dict.
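
To make this concrete, here is a rough sketch (my own illustration, not
code taken from libtorrent) of how lazy_bdecode() keeps the parsed tree
pointing into that single buffer:

    #include <cstring>
    #include <boost/shared_array.hpp>
    #include "libtorrent/lazy_entry.hpp"

    void parse_info_section(char const* buf, int size)
    {
        // one copy of the raw bytes, owned by the shared_array
        boost::shared_array<char> info_section(new char[size]);
        std::memcpy(info_section.get(), buf, size);

        // parsed view: the lazy_entry nodes only point into info_section,
        // no second copy of the data is made
        libtorrent::lazy_entry info_dict;
        libtorrent::lazy_bdecode(info_section.get()
            , info_section.get() + size, info_dict);

        // info_dict is only valid while info_section stays alive
    }

So freeing the info-section buffer also invalidates the parsed dictionary;
the two have to be unloaded together.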

I think it might be a good idea to replace the m_torrent_file (i.e. the
torrent_info object in the torrent class) with a new torrent_info constructed
only with the info-hash, when unloading. This is the state torrents are in when
they are first loaded from a magnet link and haven't downloaded the metadata
yet.

This is a reliable state to keep the torrent in (because any function that
needs more than the info-hash out of m_torrent_file checks for that case).

Obviously you would want to reload it once the torrent starts up again so that
the metadata isn't actually downloaded from the swarm for no reason.
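
For the unload itself, something along these lines may be all that's needed
(a hypothetical member function, only sketching the idea; torrent_info does
have a constructor that takes just the info-hash):

    // hypothetical torrent::mem_unload(): put the torrent back into the
    // "magnet link, no metadata yet" state by dropping the full metadata
    void torrent::mem_unload()
    {
        m_torrent_file = new torrent_info(m_torrent_file->info_hash());
    }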
(Continue reading)

arvid | 2 Aug 05:10 2010

Re: Libtorrent get_peer_info() function not getting the pieces field

Quoting pranith p r naidu <pranithpr <at> gmail.com>:

> Hello Folks,
> 
> I was trying to use the libtorrent library and got stuck on a
> particular point.
> 
> I want to know which pieces each participating client
> has in a BitTorrent swarm.
> So I used the function get_peer_info(std::vector<peer_info>& peers) to get
> the list of peers, and for each peer I tried using the field "bitfield
> pieces;" to find out which pieces each peer in that list
> has. But all the bits in pieces are set to 1, so it does not
> give me the information I wanted.

Are you sure that's incorrect? Maybe all your peers are seeds.
Why do you think your peers are not seeds?

> Is there any thing more I have to do get the pieces information of each peer
> in the peer list?

Nope, that should be it.
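
For reference, a small sketch (my own example, not part of the original
question) that counts how many of the returned peers are seeds:

    #include <iostream>
    #include <vector>
    #include "libtorrent/torrent_handle.hpp"
    #include "libtorrent/peer_info.hpp"

    int count_seeds(libtorrent::torrent_handle const& h)
    {
        std::vector<libtorrent::peer_info> peers;
        h.get_peer_info(peers);

        int seeds = 0;
        for (std::vector<libtorrent::peer_info>::const_iterator i = peers.begin()
            , end(peers.end()); i != end; ++i)
        {
            // a seed advertises every piece, so all bits in i->pieces are set
            if (i->flags & libtorrent::peer_info::seed) ++seeds;
        }
        std::cout << seeds << " of " << peers.size() << " peers are seeds\n";
        return seeds;
    }

If they all turn out to be seeds, a fully set pieces bitfield is exactly
what you should expect.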

--

-- 
Arvid Norberg


imin imup | 2 Aug 18:03 2010

Re: Dynamically unload and reload torrent data

> Cool, that's awesome. Which branch are you working on again? (0.14, 0.15 or
> trunk?).
>
Code base is 0.15.

>
> > Since the torrent files in my application can be Mega bytes large. I'm
> > looking at how to unload /reload info section. Do these two variables
> holds
> > 2 copies of info section? i.e., m_into_section holds a raw string version
> > and m_info_dict holds a parsed version.
> >         boost::shared_array<char> m_info_section;
> >         mutable lazy_entry m_info_dict;
>
> No, just one copy. All the memory is kept in m_info_section, and then a
> parsed
> tree-structure with pointers into the m_info_section memory is held by
> m_info_dict.
>
> I think it might be a good idea to replace the m_torrent_file (i.e. the
> torrent_info object in the torrent class) with a new torrent_info
> constructed
> only with the info-hash, when unloading. This is the state torrents are in
> when
> they are first loaded from a magnet link and haven't downloaded the
> metadata
> yet.
>
This appears to be a better idea. Indeed, I've already found that my current
"incompatible", incomplete implementation causes a seg-fault when torrent_info
(Continue reading)

arvid | 2 Aug 19:40 2010

Re: Dynamically unload and reload torrent data

Quoting imin imup <iminimup <at> gmail.com>:
> > Both would have
> > to be exposed through the torrent_handle class.
> 
> It can be done but it seems unnecessary for this job because torrent plugin
> directly uses torrent class, it does not use torrent handle. Is this
> correct?

I see, that makes sense.

--

-- 
Arvid Norberg

pranith p r naidu | 3 Aug 07:43 2010

Re: Libtorrent get_peer_info() function not getting the pieces field

Yes,

As it turns out, all the peers were seeds. I checked for this condition
and got the information I needed. Thanks, Arvid.

On Sun, Aug 1, 2010 at 11:10 PM, <arvid <at> cs.umu.se> wrote:

> Quoting pranith p r naidu <pranithpr <at> gmail.com>:
>
> > Hello Folks,
> >
> > I was trying to use the libtorrent libraries and was stuck up on a
> > particular point.
> >
> > I want the information about what all pieces does each participating
> client
> > has got in a Bittorrent Swarm.
> > So I used the function get_peer_info(std::vector<peer_info>& peers) to
> get
> > the list of peers and for each peer I tried using the field "Bitfield
> > pieces;" to get information about which pieces each of the peer in that
> list
> > has got.But the all the bits in the pieces is set to 1 and it does not
> > provide me the information I wanted.
>
> Are you sure that's incorrect? Maybe all your peers are seeds.
> Why do you think your peers are not seeds?
>
> > Is there any thing more I have to do get the pieces information of each
> peer
(Continue reading)

imin imup | 4 Aug 22:03 2010

Re: Dynamically unload and reload torrent data

Hello Arvid,

I've got 1 issue and 1 question.
I haven't seen a memory reduction after I tried to destruct the piece manager. I
found the refcount of the piece manager (m_owning_storage) is 3 before I set the
intrusive pointer to 0 in torrent::mem_unload(). My guess is the refcount is
2 after I set the intrusive pointer to 0. Is there any other object that is
holding a reference to the piece manager?

The question is: when a torrent is loaded in the "paused, auto-managed" state,
will on_pause() in the torrent plugin be called? I hope the answer is yes. The
concern is that if not, the dynamic unloading won't have a chance to run for
the very first time.

BR
Imin
arvid | 5 Aug 02:15 2010

Re: Dynamically unload and reload torrent data

Quoting imin imup <iminimup <at> gmail.com>:

> Hello Arvid,
> 
> I've got 1 issue and 1 question.
> I haven't seem memory reduction after I tried to destruct piece manager. I
> found the refcount of piece manager (m_owning_storage) is 3 before I set the
> intrusive pointer to 0 in torret::mem_unload(). My guess is the refcount is
> 2 after I set the intrusive pointer to 0. Is there any other object who is
> holding a reference to piece manager?

The disk_io_jobs and the disk cache hold references to it as well. You should
clear the cache before you drop that reference. You can do this by calling
async_release_files() on it.
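
A minimal sketch of that order of operations (hypothetical unload code
inside the torrent class, reusing its member names; completion handlers
omitted for brevity):

    // ask the disk thread to close the files and drop its cached blocks
    // first; the queued jobs hold their own references to the piece manager
    if (m_owning_storage.get())
    {
        m_storage->async_release_files();
        m_storage->async_clear_read_cache();
    }

    // then drop the torrent's own reference; the piece manager is only
    // destructed once the disk thread has finished the queued jobs
    m_owning_storage = 0;
    m_storage = 0;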

> The question is, when a torrent is loaded in "paused, auto-managed" setting,
> will the on_pause() in torrent plugin be called? I hope it is yes. The
> concern is, if not, the dynamic unloading won't have a chance to work for
> the very first time.

It's not. It wouldn't be hard to add though.

--

-- 
Arvid Norberg


Thomas Krenc | 6 Aug 22:51 2010

question on function find_connect_candidate (policy.cpp)


hi arvid and all,

I was performing some large-scale tests by connecting to each peer of big swarms. Unfortunately there was
something wrong with the number of current connections. So I went through the code and found the following
piece of code in policy.cpp, in lines 551/552 of the current version (0.15), in function find_connect_candidate():

        for (int iterations = (std::min)(int(m_peers.size()), 300);
            iterations > 0; --iterations)

As I understand it, find_connect_candidate() is supposed to go through the m_peers list and search for new
candidates to connect to (by certain criteria such as !already_connected, !is_blocked etc.). That's
why I could not make any sense of the number 300 in the for header. This limits the number of peers that can be
found in m_peers and finally returns no candidate until m_peers changes in the first 300 elements, right?

When I changed the for header to:

        for (int iterations = (int)m_peers.size(); iterations > 0; --iterations)

everything worked just fine and I got the desired number of connections.

What is this 300 intended for?

Kind regards,
Thomas
 		 	   		  

imin imup | 6 Aug 23:54 2010

Re: Dynamically unload and reload torrent data

> > I haven't seem memory reduction after I tried to destruct piece manager.
> I
> > found the refcount of piece manager (m_owning_storage) is 3 before I set
> the
> > intrusive pointer to 0 in torret::mem_unload(). My guess is the refcount
> is
> > 2 after I set the intrusive pointer to 0. Is there any other object who
> is
> > holding a reference to piece manager?
>
> The disk_io_jobs and the disk cache holds references to it as well. You
> should
> clear the cache before you drop that reference. You can do this by calling
> async_release_files() on it.
>
Since the plugin callback on_pause() was moved to AFTER your default
internal processing, it appears your following code already does the job you
described before the plugin callback on_pause() is called:

        // this will make the storage close all
        // files and flush all cached data
        if (m_owning_storage.get()) // <-- non-zero because not reset yet
        {
            TORRENT_ASSERT(m_storage);
            m_storage->async_release_files(
                bind(&torrent::on_torrent_paused, shared_from_this(), _1, _2));
            m_storage->async_clear_read_cache();
        }

(Continue reading)

arvid | 7 Aug 16:58 2010

Re: question on function find_connect_candidate (policy.cpp)

Quoting Thomas Krenc <krenc <at> cs.tu-berlin.de>:

> hi arvid and all,
> 
> i was performing some large scale tests by connecting to each peer of big
> swarms. unfortunately there was something wrong with the number of current
> connections. so i went through the code and found the following piece of code
> in policy.cpp in lines 551/552 of the current version (15.0) in function
> find_connect_candidate():
> 
>         for (int iterations = (std::min)(int(m_peers.size()), 300);
>             iterations > 0; --iterations)
> 
> as i get it right find_connect_candidates is supposed to go through the
> m_peers list and search for new candidates to connect to (by certain
> criterias such as !allready_connected, !is_blocked etc.). thats why i could
> not make any sense of the number 300 in the for header. this limits the
> number of peers that can be found in m_peers and finally returns no candidate
> until m_peers change in the first 300 elements, right?

No, it's not limited to the first 300 elements (at least not intentionally). The
300 number is a way to limit the amount of CPU spent finding a connect
candidate. If your peer list is extremely big (say, tens of thousands) it may be
expensive to go through the whole list for every peer we connect to. The way it's
intended to work is to scan 300 peers at a time and pick the best one. That way
the cost of connecting a peer is bounded, and scales well with huge swarms.
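
To illustrate the intended pattern, here is a simplified standalone sketch
(not the actual policy.cpp code; peer_entry, is_connect_candidate() and
is_better_candidate() are made-up names):

    #include <algorithm>
    #include <vector>

    // scan a bounded window of the peer list per call, resuming where the
    // previous call left off, so the cost per connection attempt stays
    // constant even for swarms with tens of thousands of peers
    int find_candidate(std::vector<peer_entry> const& peers, int& cursor)
    {
        if (peers.empty()) return -1;
        int const window = (std::min)(int(peers.size()), 300);
        int best = -1;
        for (int i = 0; i < window; ++i)
        {
            int const index = (cursor + i) % int(peers.size());
            if (!is_connect_candidate(peers[index])) continue;
            if (best == -1 || is_better_candidate(peers[index], peers[best]))
                best = index;
        }
        cursor = (cursor + window) % int(peers.size());
        return best; // -1: no candidate in this window, try again later
    }

With a scheme like that no peer is skipped permanently, it just may take a
few calls before a given part of the list is scanned again.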

> when ive changed the for header to:
> 
>         for (int iterations = (int)m_peers.size(); iterations > 0;
(Continue reading)

