On any search engine's site (look for something like "tips for developers" -> "Site Optimization") you will find a line saying that the most important thing for achieving a higher ranking is original content that users find interesting. But given that for most users the availability of this content depends directly on a search engine, it is never too early to think about how the search engine will work with your site, what it will work with, and, where possible, how to make its task easier.
So, to begin with, let us determine how an average search engine's robot (spider, crawler, bot, scanner) sees our pages. It turns out to be almost the same way we see them in a text browser (take Lynx, which in its day was very popular because it worked so well alongside screen readers but, unfortunately, does not run under every OS). Firefox users are in luck here: they can install the Yellow Pipe Viewer Tool extension, which emulates the behavior of Lynx. If you can reach every page of your site by clicking links in a text browser, indexing your site will most likely cause search engines no problems.
Elements such as JavaScript, DHTML, cookies, Flash and frames can make a page hard to reach in a text browser, and can accordingly prevent a search engine's robot from finding it. This does not mean we should abandon the bulk of web technologies; usually we just need to provide parallel, search-engine-friendly content.
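One common way to provide such parallel content is progressive enhancement: keep the navigation as ordinary links that Lynx and robots can follow, and let a script take over only when it is actually running. The sketch below is a minimal illustration under assumed markup (a nav element and a #content container); it is one possible approach, not a prescription.

```typescript
// progressive-nav.ts — the menu stays ordinary <a href="..."> links that a
// text browser or robot can follow; the script only *enhances* them.
// The "nav a" and "#content" selectors are assumptions about the markup.
document.addEventListener("DOMContentLoaded", () => {
  const content = document.querySelector<HTMLElement>("#content");
  if (!content) return; // nothing to enhance — the plain links still work

  document.querySelectorAll<HTMLAnchorElement>("nav a").forEach((link) => {
    link.addEventListener("click", async (event) => {
      event.preventDefault(); // take over only when scripting is available
      // Load the same URL a robot would index into the content area.
      const html = await (await fetch(link.href)).text();
      content.innerHTML = html;
      history.pushState(null, "", link.href); // keep the address crawlable
    });
  });
});
```

With scripting disabled (or in a text browser) none of this runs, and the links simply load the same pages the search engine indexes.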
[By the way, don't you think that all the fuss about accessibility features comes down to one simple fact: what's good for a screen reader is good in exactly the same way for a search engine robot? Hypocrites.]