A referer indicates how a visitor arrived at your site (thus creating a new visit). New visits originating from a visit that has timed out (so the referer is your own web site) are not counted. This is why the number of referred visits can differ from the number of visits reported in the general report.
Referers are separated into two groups: "product" and "solution". The "product" group contains visitors who are looking for products you sell or for your company. The "solution" group contains visitors who are looking for a solution to a specific problem, but not for a specific product (for example: a plumber). Visitors from the "product" group are usually more valuable than visitors from the "solution" group because they already know you or your products.
By default, all referers defined in the referer_database.xml file are in the "solution" group. Using the "ProductRefererGroups" tag in the "config.xml" file, you can move some of them into the "product" group. The "product" group usually contains visitors coming from bookmarks, direct URLs, and search engines with specific keywords.
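A minimal sketch of what such a configuration might look like. Only the "ProductRefererGroups" tag name is given above; the child element name and the referer names inside it are assumptions for illustration:

```xml
<!-- Hypothetical excerpt from config.xml: moving two referers
     (names assumed for illustration) into the "product" group -->
<ProductRefererGroups>
    <Group>bookmarks</Group>
    <Group>direct_url</Group>
</ProductRefererGroups>
```

Any referer not listed here would stay in the default "solution" group.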
Referers are shown only for human visitors. Robots most of the time use a "de-duplicated links list" (a list of unique links without referers) to crawl the web, so they mostly have no referer:
It seems to me that it's doing exactly what other 'bots do: following links on known pages to discover other, not-yet-known pages.
What's different is that it *tells* you how it found your page -- in that it provides a referrer string, whereas most 'bots don't. While a sophisticated 'bot might have a three-phase approach where it first collects many links and puts them into a database, removes duplicate links, and then pulls that set of de-duplicated links from its database and spiders them, later repeating the process, this bot apparently follows newly-discovered links immediately -- an approach better-suited to small indexes than to large.
Using the de-duplication method I posit for spiders like Googlebot above, they can remove many duplicate links before spidering, whereas if you spider links immediately, you won't have any way to "remember" previously-spidered pages that have several links pointing to them. So, you might follow multiple links to the same page. But on the other hand, with this method you can provide a meaningful referrer string for each page that you spider.
So, it's interesting that this 'bot supplies a referrer, and it would be interesting to see if you can detect duplicate incoming links by observing becomebot spidering the *same* page on your site using several *different* referrers.
This de-duplication method, by the way, is the reason that most spiders don't provide a referrer: they may be spidering your page because they found *several* references to it -- some may even require that several pages refer to your page before they will spider it. If they do have several referrers for your page, then obviously, they'd have to pick which referrer to provide to you -- an extra step in the process, so I suspect that most simply don't bother.
jdMorgan
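The three-phase de-duplication approach described in the quote above can be sketched roughly as follows. All names are illustrative; this is not any real crawler's code:

```python
# Sketch of the "collect, de-duplicate, then spider" approach described
# above. Names and structure are assumptions for illustration only.

def collect_links(pages):
    """Phase 1: gather every outgoing link found on the given pages.
    Each link is recorded together with the page it was found on,
    i.e. its potential referer."""
    found = []
    for page, links in pages.items():
        for link in links:
            found.append((link, page))
    return found

def deduplicate(found_links):
    """Phase 2: keep each target URL only once. The referer information
    is discarded here -- which is why crawlers working this way have no
    single meaningful Referer header to send in phase 3."""
    seen = set()
    unique = []
    for link, _referer in found_links:
        if link not in seen:
            seen.add(link)
            unique.append(link)
    return unique

# Example: two pages both link to /contact; after de-duplication the
# crawler fetches /contact only once, but it can no longer say which
# of the two pages it came from.
pages = {
    "/index": ["/about", "/contact"],
    "/about": ["/contact"],
}
queue = deduplicate(collect_links(pages))
print(queue)  # -> ['/about', '/contact']
```

A bot that instead follows each newly discovered link immediately (as the quoted post suggests becomebot does) skips phase 2, fetching /contact twice here -- but each fetch can carry the page it was found on as a referrer.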
A small number of visits recognized as "product" referers are in fact "solution" referers: mostly visits from directories that do not send a referer.
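As a rough illustration of the grouping logic described in this section. The group assignments here are assumptions based on the description above; the real rules live in referer_database.xml and config.xml:

```python
# Rough sketch of the product/solution grouping described above.
# The rules are assumptions for illustration; the real tool reads them
# from referer_database.xml and config.xml.

# Referer names assumed to have been moved into the "product" group via
# the ProductRefererGroups tag (hypothetical values):
PRODUCT_REFERERS = {"bookmarks", "direct_url"}

def classify(referer_name):
    """Return the group a named referer belongs to. By default every
    referer is in the "solution" group; only those listed in
    PRODUCT_REFERERS are treated as "product"."""
    if referer_name in PRODUCT_REFERERS:
        return "product"
    return "solution"

print(classify("direct_url"))      # -> product
print(classify("some_directory"))  # -> solution
```

This also shows the caveat above: a directory that sends no referer is indistinguishable from a direct visit, so it would wrongly land in the "product" group under any scheme like this.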