Table of contents
1 Adwords
2 Adsense
3 Apache
4 API
5 API Key
6 Backlink
7 Blog
8 Clickthrough
9 CPA
10 CPC
11 CPM
12 CPQ(L)
13 Critical Mass
14 CSS
15 CTR
16 Deep Crawl
17 Directory
18 Google
19 Googlebot
20 Googlism
21 HTML
22 Inbound Links
23 Index
24 IP Address
25 ISP
26 Log
27 MFA
28 Open Source
29 PageRank
30 Proxy Server
31 Reciprocal Links
32 robots.txt
33 Root
34 SEO
35 Search Engine
36 SERPs
37 Session ID
38 Scraping
39 Spam
40 Spider
41 Visual Confirmation
42 Web Server
43 Wiki
44 XHTML
45 XML
Adwords
Google's advertising network, where advertisers pay a certain
amount for each click (see CPC). You can set how much you want
to spend per day on advertising and create several custom text
ads, which will rotate through the search engine results and/or
content pages as well (see Adsense).
Adsense
Google's money-making program for webmasters, where you can show
targeted, relevant ads on your web sites. Commission is paid
out at 50% of the total revenue for a click. Signup is free
and only requires a taxpayer ID or Social Security number.
Google will then send you a check each month, once you earn
over $100. See Google Adsense (http://www.google.com/adsense)
Apache
One of the most popular open source web servers on the internet.
More info at apache.org (http://www.apache.org)
API
API stands for Application Program Interface. In its simplest
definition, an API is a means by which one program can exchange
information with another program.
API Key
An API key is typically a unique ID given upon registration for
usage of an API, such as Google's API. The key identifies the
user and, in Google's case, serves as a means of tracking the
user, as well as limiting the user to 1,000 queries per day.
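As a rough sketch of how such a key is commonly passed along with
a request (the endpoint, parameter names, and key below are
hypothetical, not Google's actual API):

  use LWP::UserAgent;
  use URI;

  # hypothetical endpoint and key, for illustration only
  my $uri = URI->new('http://api.example.com/search');
  $uri->query_form(key => 'YOUR-API-KEY', q => 'organic seo');

  my $res = LWP::UserAgent->new->get($uri);
  print $res->is_success ? $res->content : $res->status_line;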
Backlink
Backlink (also referred to as an "incoming link" or "reciprocal
link") refers to a link from another web page to your site. For
example, you can type "link:www.organicseo.org
(http://tinyurl.com/7safk)" into Google to determine how many
pages link to your site, which in turn has a direct effect
on your ranking in the SERPs.
Blog
Simply put, a blog (or weblog) is an online diary or journal. There
are many options out there for bloggers, including blogger.com
(http://www.blogger.com) and others, that allow a user to
update their blog on a daily basis. Blogs usually revolve around
a topic or interest held by the owner, although they can also
be rants or other informative postings. Because of the dynamic
nature of blogs, Googlebot tends to visit these sites more often,
since the content is updated regularly.
Clickthrough
A clickthrough (or clickthru) refers to the act of a visitor actually
clicking on an advertisement and following through to the advertiser's
web site.
CPA
CPA is short for "Cost Per Acquisition". Simply put,
it means you get paid for every sale you generate promoting a product.
You advertise a product however you choose:
banner, text ad, link, etc. For every sale your marketing
generates, you earn a percentage, typically 40-50%. In theory,
this allows you to make more money than charging a flat CPM
rate for advertising.
CPC
CPC (Cost Per Click) refers to the amount of money an advertiser
will pay each time someone clicks on their ad. For example, the
word "casino" in Google Adwords may cost an advertiser
upwards of $12 per click.
CPM
CPM stands for Cost Per Mille ("mille" being Latin for thousand),
and describes the amount of money paid for 1,000 page impressions.
However, since it was determined in the dotcom days that impressions
could easily be faked, much of the industry has moved toward
pricing based on clickthroughs instead (see CPC).
CPQ(L)
Cost Per Qualified Lead (CPQL) refers to the cost of your advertising
campaign: the advertising spend divided by the number
of "qualified" referrals. If you spend $100 and receive
4 qualified leads, then your CPQL is $25. Typically, the number
of clickthroughs is much higher than the number of qualified leads,
as not everyone who clicks on your ad will end up qualifying as
a potential client or customer.
Critical Mass
Critical mass describes the state of an interactive site
in which its users start to really contribute and visit the
site on a regular basis. It's often used to describe the early
stage of an online forum or community.
CSS
CSS stands for Cascading Style Sheets, and is used to separate
content from presentation in a web page. Typically contained
within a separate external file (e.g., style.css), CSS lets you
control the look of a page, from font sizes and colors to the
positioning of elements on the web page. For more info, see
W3 Schools (http://www.w3schools.com/css/default.asp)
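A minimal sketch of what such an external stylesheet might contain
(the selectors and values are illustrative only):

  /* style.css - presentation lives here, not in the HTML */
  body {
      font-family: Arial, sans-serif;
      color: #333333;
  }
  h1 {
      font-size: 1.5em;
      color: #006600;
  }

The page then pulls it in with a single
<link rel="stylesheet" type="text/css" href="style.css">
tag in its <head>.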
CTR
Click Through Rate (CTR) refers to the total number of clickthroughs
divided by the total number of impressions. For example, if you
had 10,000 banner impressions that yielded 200 clicks on your
ad, you would have a CTR of 2%. It's difficult to say what a
good clickthrough rate is, as it varies by site and product. Typically,
2-4% is considered pretty good; the more targeted your ads are,
the greater chance you'll have of increasing your clickthrough rate.
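Written as a formula, using the numbers from the example above:

  CTR = (clicks / impressions) x 100%
      = (200 / 10,000) x 100%
      = 2%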
Deep Crawl
A deep crawl is when a spider, such as Googlebot, crawls an entire
web site. Typically, Googlebot will index only the first page
(or home page) the first time it encounters a new site not already
in its index. It can take up to a month for Googlebot to return
and do a deep crawl, following all the interlinked pages on a
particular web site.
Directory
A directory is simply a web site, which maintains a collection
of other web sites that are submitted by people, categorized
by content. A listing in a directory usually contains a link,
with a descriptive title, and a brief description of the web
site listed. The most well known free directory is Dmoz (http://www.dmoz.org).
There are many, many directories on the web, big and small alike;
some are broad in classification, like Dmoz, while others are
dedicated to specific topics, for example a "business directory".
Your best bet is to submit your site to these directories, rather
than search engines.
Google
The world's most successful search engine, Google (http://www.google.com)
has grown by leaps and bounds in the last several years, doubling
its number of indexed pages to over 8 billion. It's the most
sought-after search engine for placement by SEO masters.
Googlebot
One of the more well-known spiders, owned and operated by Google
(http://www.google.com), which crawls the web, indexing pages.
Googlism
Google offers several little-known tricks or functions, also known
as "googlisms", that you can type into the search field to
retrieve information about a web site, ranging from the number
of backlinks to the number of pages indexed. See Googlisms for
more info.
HTML
HTML stands for HyperText Markup Language. HTML is the standardized
way of laying out the content in a web page. More recently, XHTML
has been gaining ground over HTML as a more structured markup
language. See HTML on Wikipedia.org (http://en.wikipedia.org/wiki/HTML)
Inbound Links
See Backlinks
Index
This refers to a search engine company's database, which stores
all the content on the web its spiders have crawled. The spiders'
actions are commonly referred to as "indexing the web".
IP Address
The IP (Internet Protocol) address is a unique number every computer
on a network has. Think of it as a phone number for your computer,
used to identify your ISP and your computer. People engaging
in malicious activities have often been traced by their IP address.
See Internet Protocol (http://en.wikipedia.org/wiki/Internet_Protocol)
on Wikipedia.org for more info.
ISP
ISP stands for Internet Service Provider. An ISP can be the means
you use to connect to the Internet (examples include Comcast,
SBC DSL, or AOL), or even the company from which you purchase
your web server or hosting.
Log
A log file refers to a file that is created by a program as a
means of tracking what happens during the usage of that
program. For SEO, your most important tool is your web server's
log file. It will provide you with information about your visitors,
ranging from pages accessed and times, to which search phrases
and search engines they are using. See statistics.
MFA
Made For Adsense (MFA) refers to questionable, spammy web sites
made with the sole intention of generating clicks on Google
Adsense ads.
Open Source
Open source refers to a methodology of software development. Its
strength lies in giving away a software package for free,
including its source code, which is available for anyone
to peruse and improve. The software can be redistributed under
licenses such as the GPL (http://en.wikipedia.org/wiki/GPL),
which lets you modify and redistribute the code as long as those
same rights are also passed along. Many open source projects are
maintained by a vast community of volunteer software developers.
Projects of note are FireFox (http://www.mozilla.org)
and Linux (http://www.linux.org).
PageRank
Google PageRank refers to the number between 0 and 10 (10 being
the highest) that Google assigns a page. To view a page's PageRank,
you can download the Google Toolbar (http://toolbar.google.com)
for Internet Explorer or GoogleBar w/PageRank
(http://www.prgooglebar.org) for FireFox (http://www.mozilla.org)
Proxy Server
A proxy server is a server connected to the internet that will
forward a request (usually HTTP (http://en.wikipedia.org/wiki/Http))
to the final destination on your behalf. People use a proxy
server to cover up the originator's IP address. You can send
your request through a second, or even a third, proxy server,
and the results of the request are forwarded back through the
chain to you. Thus, querying google.com would make it look like
you were coming from the proxy server's IP address, not your own.
Proxy servers are used by nefarious tools, as well as by people
who simply want to remain anonymous; a typical use is scraping
web sites.
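For example, a script can route its requests through a proxy with
Perl's LWP (the proxy host and port here are made up for
illustration):

  use LWP::UserAgent;

  my $ua = LWP::UserAgent->new;
  # send all HTTP traffic through the proxy instead of connecting directly
  $ua->proxy('http', 'http://proxy.example.com:8080/');

  my $res = $ua->get('http://www.google.com/');
  print $res->is_success ? "fetched via proxy\n" : $res->status_line . "\n";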
Reciprocal Links
See Backlinks
robots.txt
robots.txt is a text file placed in the root of your web site
(the top level of htdocs) to limit spiders' access to particular
directories. robots.txt is only honored by spiders that support
it; screen scrapers commonly do not look for a robots.txt file.
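A minimal sketch of a robots.txt (the directory names are
illustrative only):

  # keep all compliant spiders out of these directories
  User-agent: *
  Disallow: /cgi-bin/
  Disallow: /private/

  # but let Googlebot crawl everything (an empty Disallow permits all)
  User-agent: Googlebot
  Disallow: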
Root
Root describes the "superuser" account on Linux or Unix-based
systems. Usually only the administrators of the server have access
to this account; it allows full control of everything on the server.
It sometimes comes up in SEO because certain tweaks require it
on the server, for example URL rewriting in Apache (see the
sketch below).
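As a sketch of that kind of tweak, here is a minimal mod_rewrite
rule as it might appear in an .htaccess file (the pattern and paths
are illustrative only; enabling mod_rewrite in Apache's
configuration is the part that typically requires root):

  RewriteEngine On
  # rewrite /page/anything to /index.php?page=anything
  RewriteRule ^page/(.*)$ /index.php?page=$1 [L]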
SEO
SEO is an acronym for "Search Engine Optimization": the act of
optimizing a web site for higher rankings in search engines.
Search Engine
A search engine is a web site dedicated to indexing the web via
spiders, providing its users with a searchable database of content
from the web. Search engines of note are Google (http://www.google.com),
Yahoo! (http://search.yahoo.com/), and MSN (http://search.msn.com).
SERPs
Search Engine Results Pages (SERPs) is commonly used to describe
how well you're doing in the search engines for a particular
phrase. For example, one might say "I'm ranking #2 for the
phrase 'seo wiki' in the Google and Yahoo SERPs."
Session ID
A session ID is a unique identifier assigned to a visitor, usually
as a parameter in the URL ("/index.php?sid=234jiod08ekjd08pje")
or as a cookie. Spiders tend to get caught up in sites with session
IDs, and search engines are now actively weeding out sites that
use session IDs in the URL.
Scraping
Scraping refers to the act of pulling and parsing content from a
web site in an automated process. Typically this might be a Perl
script using the LWP::UserAgent and HTTP::Request modules to grab
content from a server, as sketched below. Scraping web sites is
a somewhat questionable practice unless you have permission from
the copyright owner beforehand.
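A minimal sketch along those lines (example.com stands in for the
target; the regex "parsing" is deliberately crude, for illustration
only):

  use LWP::UserAgent;
  use HTTP::Request;

  my $ua  = LWP::UserAgent->new(agent => 'my-scraper/0.1');
  my $req = HTTP::Request->new(GET => 'http://www.example.com/');
  my $res = $ua->request($req);

  if ($res->is_success) {
      # crude parse: pull the page title out of the returned HTML
      my ($title) = $res->content =~ m{<title>(.*?)</title>}is;
      print "Title: $title\n";
  } else {
      print "Request failed: ", $res->status_line, "\n";
  }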
Spam
Most people think of spam as unsolicited email, but spam can also
take the form of unsolicited postings on blogs, forums, and
guestbooks. This is quite common if you have a free link section
on your web site, or a forum in which people can register and
post links. Countermeasures include requiring visual confirmation
in order to post, so as to keep people from writing automated
scripts (non-organic methods) that look for exploitable software.
Spider
Also known as "robots", "bots", or "crawlers",
these are programs search engines use to index the web. They
typically start at one location, or many, and follow all the links
on each page they index. As they go about their crawling, they
inevitably find new pages and web sites, and continue to follow
all those links as well, eventually building up a huge database
that contains all the pages and sites the spider has indexed.
Googlebot is one example of a spider; it crawls the web 24/7,
indexing the internet for the Google search engine.
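A toy sketch of that crawl loop in Perl, assuming example.com as
a starting point (it has no politeness delay, depth limit, or
robots.txt handling, all of which a real spider needs):

  use LWP::UserAgent;
  use HTML::LinkExtor;
  use URI;

  my $ua    = LWP::UserAgent->new(agent => 'toy-spider/0.1');
  my @queue = ('http://www.example.com/');
  my %seen;

  while (my $url = shift @queue) {
      next if $seen{$url}++;      # skip pages already visited
      my $res = $ua->get($url);
      next unless $res->is_success;
      print "indexed $url\n";     # a real spider would store the content
      # extract every link and queue its absolute URL for crawling
      HTML::LinkExtor->new(sub {
          my ($tag, %attr) = @_;
          push @queue, URI->new_abs($attr{href}, $url)->as_string
              if $tag eq 'a' and $attr{href};
      })->parse($res->decoded_content);
  }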
Visual Confirmation
Visual confirmation refers to a countermeasure that can be implemented
on a submission form to discourage automated spam submissions.
Typically it consists of an image containing a sequence of
mixed-case letters or a word. The person then needs to enter the
letter/number sequence from the image into a form field in order
to continue. This is common practice on forums and blogs, as it
stops automated scripts from submitting and getting a free link.
Web Server
A web server is a computer that is set up to serve web pages to
clients (your visitors). In other words, it's the computer (including
its web serving software, like Apache) that is hosted at your
ISP's location and handles all requests for your web site. See
Web Server (http://en.wikipedia.org/wiki/Web_server) on Wikipedia
for more detailed info.
Wiki
A wiki is a web site software package that allows visitors to edit
the content of the pages. This site uses MediaWiki (http://www.mediawiki.org).
Another notable wiki is Wikipedia (http://www.wikipedia.org),
an online encyclopedia edited by the internet community. There
are many other wiki packages available as well.
XHTML
XHTML stands for eXtensible HyperText Markup Language, and is a
stricter form of its predecessor, HTML. See XHTML on Wikipedia.org
(http://en.wikipedia.org/wiki/XHTML)
XML
XML is short for eXtensible Markup Language. In layman's terms,
XML is a language used to describe data or content. XML is
advantageous because it serves as a parsable, structured form of
markup, unlike HTML. XML is only considered well-formed when the
tags are nested and closed properly, as in the sketch below. See
XML on Wikipedia.org (http://en.wikipedia.org/wiki/XML).
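A minimal sketch of well-formed XML (the element names are made
up for illustration):

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- every element is closed and properly nested -->
  <site>
    <page url="http://www.example.com/">
      <title>Home</title>
    </page>
  </site>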