Saturday, March 27, 2010

Search Engine Optimization Tool and Software

Since the majority of Web site visitors begin with one of the major search engines, visibility on these sites has become crucial for most businesses. But what can someone do to increase his or her rank for keywords and phrases relevant to the Web site's content? The theme you'll hear from most Web marketers these days is this: “Submitting is not enough.”

Why Use WebPosition?
WebPosition is a tool that enables Web site owners and managers to systematically improve their Web site's relevance for keyword and phrase queries in the major search engines. Traditionally, this has required long hours of trial and error, as well as poring over dozens of how-to guides and newsletters. WebPosition offers eight critical elements needed to create, improve, and maintain search engine rankings:

Page Builder: Creates an HTML page optimized for the selected engine and keyword. Templates can be created to customize the look and feel of the pages it builds.

Wordtracker Keywords: Wordtracker is a U.K.-based company with which WebPosition has partnered to provide a suite of keyword research tools.

WebTrends Keywords: This feature allows WebPosition to gather information from WebTrends about which search engines and keywords the visitor used to find the URL or web page.

Page Critic: Supplies expert advice on how to optimize new or existing Web pages. Page Critic knows each search engine's unique “personality” and how to build top-ranking pages for each without “spamming” or abusing the index.

Page Editor: Easily modify pages with the built-in HTML editor or use your own.

Submitter: Once pages are created or modified, they must be submitted. WebPosition offers advanced submission “safety” features not found elsewhere that ensure users don't over-submit to an engine or accidentally violate one of its ever-changing submission guidelines.

Reporter: Checks a site's positions in the major search engines to determine its rank in the top engines for each chosen search keyword or phrase.

Scheduler: Activates the Reporter daily, weekly, or monthly to automatically check a Web site's positions. You can also schedule submissions and even run Link Defender missions.

For a detailed description of each part of WebPosition, please refer to our other training manual, “Using WebPosition”.

Checking your listing in Search Engines

Some crawler-based search engines make it easy to confirm that your web page is in their index. With others, it can be more difficult. Below are the best ways to find your web pages in the major crawler-based search engines.

AltaVista

AltaVista has commands that can be used to easily narrow your search to a single URL or to pages within a particular web site. These commands can also be combined with query terms by those who wish to refine their search results.

URL Search
To find a single page listed in AltaVista's crawler-based index, you can use the "url:" command. Simply preface the URL you wish to locate with this command, such as:

url:http://searchenginewatch.com/webmasters/meta.html

If the URL is in the index, it will be displayed. You can also use this command to find pages within a particular section of a web site. For example, this:

url:http://searchenginewatch.com/webmasters/


Site Search
To locate all the URLs listed from a particular web site, use the "host:" command, such as:

host:searchenginewatch.com

Use only the actual domain name. Omit the http:// prefix. Also, be aware that using the www prefix can make a difference.

AllTheWeb.com/FAST Search

At FAST Search, commands can be used to find a single URL or multiple web pages from a particular site, as explained below:

URL Search
To find a single page listed in FAST's crawler-based index, you can use the "url.all:" command:

url.all:searchenginewatch.com/webmasters/meta.html

This command will also work to bring up a single URL that is listed in the FAST-powered results used by Lycos.

Site Search
To locate all the URLs listed from a particular web site, use the "url.host:" command, such as:

url.host:searchenginewatch.com

Use only the actual domain name. Omit the http:// prefix. Also, be aware that using the www prefix can make a difference, as described with AltaVista.

Google

At Google, commands can be used to find a single URL or multiple web pages from a particular site, as explained below:

URL Search
To find a single page listed in Google's crawler-based index, you can use the "allinurl:" command, such as:

allinurl:searchenginewatch.com/webmasters/meta.html

The "allinurl:" command works much like AltaVista's "url:" command, which means you can also use it to find pages within a particular section of a web site. Be sure to omit the http:// prefix.

Please note that if you are trying to find web pages with both words in the URL and in the document itself, you'll need to use the special "inurl" command.

Site Search
To locate all the URLs listed from a particular web site, use the "site:" command in combination with a word or words that you know appear on all the pages. For example:

site:searchenginewatch.com searchenginewatch

Inktomi

Inktomi powers some of the results used by a variety of different search engines. Below is how to locate a single URL or multiple URLs within Inktomi-powered listings.

URL Search
To find a particular URL listed in Inktomi's crawler-based index, you can use the "originurl:" command. Simply preface the URL you wish to locate with this command, such as:

originurl:http://searchenginewatch.com/webmasters/meta.html

If the URL is in the index, it will be displayed. This command has been tested to work on the following Inktomi-powered services:

• AOL Search
• GoTo
• HotBot

The originurl command will bring up an individual URL listed in the Inktomi-powered results of these services. It does not work at iWon, LookSmart or MSN Search.

Keep in mind that not all Inktomi partners tap into the entire Inktomi database. That's one reason why you may find a URL at one service but not at another.

Site Search

To locate all the URLs listed from a particular web site, use the "domain:" command, such as:

domain:searchenginewatch.com

Use only the actual domain name. Omit the http:// prefix. As explained above for AltaVista, using the www prefix can also make a difference.
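Pulling the engine-specific commands above together: each engine simply expects a command prefix in front of the URL or domain, so a small helper can build these check queries. The sketch below is in Python purely for illustration (the manual itself contains no code); the prefixes and the treatment of the http:// prefix follow the sections above.

```python
# Command prefixes for checking a single URL, as described above.
URL_COMMANDS = {
    "altavista": "url:",      # takes the full URL, including http://
    "fast": "url.all:",       # omit the http:// prefix
    "google": "allinurl:",    # omit the http:// prefix
    "inktomi": "originurl:",  # takes the full URL, including http://
}

# Command prefixes for listing all indexed URLs from a site.
SITE_COMMANDS = {
    "altavista": "host:",
    "fast": "url.host:",
    "inktomi": "domain:",
    "google": "site:",        # Google needs an accompanying word; see below
}

def url_query(engine, url):
    """Build the query that checks whether one URL is in an engine's index."""
    if engine in ("fast", "google") and url.startswith("http://"):
        url = url[len("http://"):]   # these engines expect no http:// prefix
    return URL_COMMANDS[engine] + url

def site_query(engine, domain, word=None):
    """Build the query that lists a site's indexed URLs.

    For Google, pass a word known to appear on all the pages.
    """
    query = SITE_COMMANDS[engine] + domain
    return query + (" " + word if word else "")

print(url_query("google", "http://searchenginewatch.com/webmasters/meta.html"))
# allinurl:searchenginewatch.com/webmasters/meta.html
print(site_query("google", "searchenginewatch.com", "searchenginewatch"))
# site:searchenginewatch.com searchenginewatch
```

Typing the resulting string into the relevant engine's search box performs the check described in each section.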

Checking your listing in Directories: Yahoo, LookSmart & the Open Directory

Directories are search engines that are powered by human beings, rather than by crawling the web. Because humans are involved, directories tend to list only a few pages per web site. This means that you probably won't need to make use of special site or URL commands to locate your listings. In fact, of the three major directories, only Yahoo has any specific command like this. At Yahoo, you can use the "u:" command to locate specific URLs, like this:

u:searchenginewatch.com

That would bring up any pages from Yahoo's human-compiled listings that contain "searchenginewatch.com" within the URL.

At the web's two other major directories, LookSmart and the Open Directory, you'll find that searching for your domain or a portion of your domain should bring up many or all of your listings.

For example, by entering "searchenginewatch.com" or "searchenginewatch," I would be able to find most of my human-compiled listings in both places.

LookSmart also provides a detailed guide to locating your URL within its service and the listings it provides to partners:

http://submit.looksmart.com/info.jhtml?page=find

Search Engine Submission

This section of the training manual covers search engine submission, placement and marketing issues. It explains how search engines find and rank web pages, with an emphasis on what webmasters can do to improve their search engine rankings by submitting properly.

How can I get my site listed with search engines? It sounds like a pretty simple question, but sadly, search engine submission can be a complicated subject.

Have no fear. This section will take you through the essential and relatively easy steps you can take to get listed with search engines.

What is Search Engine Submission?
“Search engine submission” refers to the act of getting your web site listed with search engines. Another term for this is search engine registration.

Getting listed does not mean that you will necessarily rank well for particular terms, however. It simply means that the search engine knows your pages exist.

Think of it like a lottery. Search engine submission is akin to you purchasing a lottery ticket. Having a ticket doesn't mean that you will win, but you must have a ticket to have any chance at all.

Submitting to Directories
Directories are search engines powered by human beings. Human editors compile all the listings that directories have. Getting listed with the web's key directories is very important, because their listings are seen by many people. In addition, if you are listed with them, then crawler-based search engines are more likely to find your site and add it to their listings for free.

Benefits of submitting to Directories
One great way to increase your traffic and weight in the search engines is to submit to directories.

Submitting to directories is certainly a good plan if you are interested in increasing the number of links to your web page. As you may already know, the more links you have, the higher search engines will rank your web page based on its popularity. This is one of the greatest benefits of submitting to directories: your site will receive many back links and carry a higher weight with the search engines. As a result, your site will be ranked higher by the search engines and have more links for people to click on their way to your site.

Another benefit your site will enjoy by being listed in a directory is increased credibility. Being part of a directory increases credibility because an actual person must review your site for quality content before it is listed. Because of this, when your site is listed in a directory, web searchers immediately know your website provides relevant information and will be more likely to pay you a visit.

Both of these benefits of submitting to directories result in your ultimate goal, which is increasing traffic.

Preparation
You should prepare before submitting to any directory. This preparation means writing a description of your entire web site in 25 words or fewer. That description should make use of the two or three key terms that you hope to be found for.

Submitting to Yahoo
Do a search on Yahoo, and the main results that come back are “powered” by Yahoo's crawler. Despite this, Yahoo maintains its own independent “directory” of web sites, which is compiled by its human editors.

Yahoo has two submission options: “Standard” which is free, and “Yahoo Express” which involves a submission fee.

Anyone can use Standard submission to submit for free to a non-commercial category. You'll know a category is non-commercial because, when you try to submit to it, the Standard submission option is offered in addition to the Yahoo Express paid option, discussed further below.

Why might you choose to pay when the free search engine submission option is available? Simply for a fast turnaround time. If you use the free submit choice, there's no guarantee that your submission will be reviewed quickly, or at all.

Your submission to a non-commercial category is more likely to be accepted if your content is not overtly commercial. For example, submitting the home page of a site that sells running shoes is likely to be seen as commercial and not accepted. However, if you have a page within that web site that discusses in depth how to select the right type of shoes for different running races, then that page might be deemed helpful, non-commercial information and accepted.

As for commercial categories, Yahoo requires that sites pay a Yahoo Express submission fee of $300 per year. This fee doesn't guarantee that you will be listed, only that you'll get a yes or no answer about being accepted within seven business days. However, the vast majority of decent sites are accepted.

If accepted, you'll be reevaluated after a year and charged the submission fee again, if you want to stay in Yahoo's commercial area. You should review the traffic you received from Yahoo over the past year, to decide if it is worth paying the fee again. If not, you can decline to be listed, and you will not be charged.

How do you submit? If you are submitting for free to a non-commercial category, click on the "Suggest a Site" link that appears at the top right-hand corner of the category page. That will bring up a submission form. Fill it out, and you're done.

If you are paying to submit, you needn't pick a category. Instead, just use the Yahoo Express Submission Form. From there, Yahoo editors will choose a category for you. All you need to do is fill out the form that's presented.

The above tips are the bare essentials to getting listed with Yahoo. If you are in a hurry, you can follow them, and you'll probably get listed and receive some traffic from the service. However, you may want to do even more preparation before submitting to this important service.

Submitting to the Open Directory
The Open Directory is a volunteer-built guide to the web. It is provided as an option at many major search engines, including Google. Given this, being listed with the Open Directory is essential to any site owner.

The good news is that submission is absolutely free. The bad news is that this means there's no guaranteed turnaround time to getting a yes or no answer about whether you've been accepted.

To submit, locate the category you want to be listed in. Then use the “add URL” link that appears at the top of the category page. Fill out the form, and that's it -- you've submitted.

If you are accepted, you should see your site appear within about three weeks. If this doesn't happen, then you should resubmit.

Submitting to Crawlers

Crawler-based search engines automatically visit web pages to compile their listings. This means that, unlike directories, you are likely to have several if not many pages listed with them. This also means that by taking care in how you build your pages, you might rank well in crawler-produced results.

Submitting to Google
One of the most important crawler-based search engines is Google, because many people search at it, plus it "powers" the main results of several other services.

The absolute best way to get listed with Google is to build links to your web site. Indeed, this is the best way to get listed for free with all the major crawlers listed on this page. Crawlers follow links, so if you have good links pointing at your web site, the crawlers are more likely to find and include your pages.

Here's the good news: if you submitted your site to the major directories and got listed with one of them, then Google and other crawlers will almost certainly pick up the URL that was listed. This means you may not need to do additional work to get listed with crawlers.

Google provides an Add URL page that lets you submit a URL directly to its crawler. There's no guarantee that Google will actually include a URL submitted to it this way, however. Despite this, it makes sense to submit your home page and perhaps one or two other URLs from "inside" your web site via the Add URL page.

You really don't need to submit more than this. The only reason for submitting some of your inside pages is in case there is a problem reaching your home page. This gives Google an alternate route into your site. From whatever page it visits, it will look for links to other pages that you have and perhaps include those. This is true for other crawlers, as well.

If you have a brand new web site, it will probably take about a month before Google lists your web pages. Because of this, you might consider making use of its paid placement program.

Submitting to Yahoo
Yahoo is an important crawler-based search engine because many people use the Yahoo site and it provides the main results of several other services.

As covered with Google, building links is the best way to get listed for free. Yahoo also offers a free URL submission form. Submit according to the same instructions as for Google, above.

What if you aren't picked up for free? Yahoo has paid inclusion programs that guarantee to add the pages you submit quickly. The downside to these programs is that you'll be charged every time someone clicks on your listing. If you run out of money, potentially, your listing may be dropped. However, there's still a chance that even if you run out of money, you might continue to be listed for free.

By the way, Yahoo's crawler incorporates technology from three different crawlers that it purchased in 2002 and 2003: Inktomi, AltaVista and FAST's AllTheWeb. Any references you hear about those crawlers are now superseded by the single Yahoo crawler in operation.

Submitting to Teoma
Teoma is an important crawler-based search engine because it powers the main results that appear at the popular Ask Jeeves web site. In fact, Ask Jeeves owns Teoma.

Teoma has no free Add URL page. This doesn't mean that you can't get listed, however. Teoma crawls the web, so if you have links pointing at your web site, you may get included naturally.

Submitting to MSN
MSN Search is an important crawler-based search engine used by many people. It is currently powered by Yahoo's crawler-based results.

Submitting Via Paid Placement Listings

Every major search engine with significant traffic accepts paid listings. This unique form of search engine advertising means that you can be guaranteed to appear in the top results for the terms you are interested in within a day or less. Given this, paid listings are an option that should be explored by site owners who wish to quickly build visibility. They may also be a long-term advertising option for some.

Overture
Overture allows sites to “bid” on the terms they wish to appear for. You agree to pay a certain amount each time someone clicks on your listing. This is why it is sometimes called “pay-per-click” (PPC) or a “cost-per-click” (CPC) search engine.

For instance, let's say you wanted to appear in the top listings for "running shoes." You might agree to pay 25 cents per click. If no one agrees to pay more than this, then you would be in the number one spot. If someone else later decides to pay 26 cents, then you slip into the number two position. You could then bid 27 cents and move back on top, if you wanted to.
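The auction logic in that example is nothing more than a sort by bid amount. A minimal Python sketch (advertiser names and bid amounts are made up for illustration, and real bid engines apply further rules):

```python
def rank_bids(bids):
    """Order advertisers by bid, highest first.

    `bids` maps advertiser name -> bid in cents.
    Python's sort is stable, so ties keep their existing order.
    """
    return sorted(bids, key=bids.get, reverse=True)

bids = {"you": 25, "rival": 26}
print(rank_bids(bids))   # ['rival', 'you'] - the 26-cent bid outranks yours

bids["you"] = 27         # raise your bid to 27 cents to move back on top
print(rank_bids(bids))   # ['you', 'rival']
```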

While some people go directly to the Overture web site to search, most people encounter Overture's paid listings via other search engines. For example, the very top listings for "running shoes" at Overture would also appear in the sponsored areas of other sites.

If your goal is to build visibility on search engines quickly, then Overture is an essential option for you to explore. It can put you in the top results of many major search engines in a short period of time.

I think it is well worth it for anyone to open an Overture precision match account and experiment with how paid listings may help them. An account requires a $50 minimum deposit, and you must spend at least $20 per month. By carefully selecting targeted terms, you can stretch that money out for one or two months and get quality traffic.

When your initial deposit has expired, you may find that the editorial or "free" listings generated by your submissions to directories and crawlers have kicked in. This may mean that you can eliminate your ad spend with Overture entirely. On the other hand, you may find that you want to continue spending and perhaps even increase your budget, to target terms where you don't receive good editorial placement.

By the way, Overture was formerly known as GoTo. It changed its name in mid-October 2001. The company was purchased by Yahoo in 2003.

Google AdWords
Google sells paid listings that appear above and to the right-hand side of its regular results through a program called Google AdWords. Since it may take time for a new site to appear within Google, these advertising opportunities offer a fast way to get listed with the service. Also, as with Overture, they may be a continuing option you wish to explore.

Google's self-service AdWords program charges a per click fee, similar to Overture. AdWords charges a $5 activation fee, and $25 ought to last you about a month, if you've carefully selected your terms.

Google also distributes its ads to other partners, which provides you with exposure to more potential traffic.

Things to Avoid

1. Strange color combinations that make the page hard to read.
2. Text that is too small.
3. Unrelated advertisements that distract viewers.
4. Hidden text (normally text that matches the color of your page background).
5. Instances of 5 or 6 keywords being repeated in a row.
6. Submitting your website to FFA (Free For All) sites.
7. Duplicate pages, doorway pages and cloaked pages.
8. Redirection from the home page to another page.
9. Automated submission to search engines.
10. Submitting to search engines more than ONCE a month!
11. Keyword stuffing in comment tags, Meta keyword tags and image ALT tags.
12. Participating in link farms. (A link farm consists of sites that link to other sites for the sole purpose of increasing their link popularity score.)
13. Use of the <> tag in the HTML page header.
14. Using trademarks or company names belonging to others in your Meta tags.

Some Tips about SEO

1. Make sure your Website is completely finished, spell-checked, and online, and that all the links are working properly, before submitting to search engines.
2. Try to include ‘robots.txt’ to prevent spiders from crawling certain directories. (For more in-depth study of robots please visit http://www.robotstxt.org)
3. If a site was developed in a language other than HTML, make sure that there’s AT LEAST SOME static content for the spider to read.
4. Try and get on Dmoz if you are not already on there.
5. Read each search engines guidelines and follow them strictly.
6. Use the Meta tags in all pages.
7. To find proper keywords that are commonly used, visit:
a) http://www.wordtracker.com
b) http://inventory.goto.com/d/searchinventory/suggestion/
c) https://adwords.google.com/select/main?cmd=KeywordSandbox
8. For keyword density analysis, visit:
a) http://www.keyworddensity.com/
b) http://www.ranks.nl/
9. If you are using frames, try to include a <noframes> tag with keyword-rich content in the main page (the page that contains the <frameset> tags).
10. Keep all your content in the root directory if possible.
11. Don’t use images for links to important content pages. If you do, add ALT tags.
12. Do not try to cover too many topics on one page, as it will dilute the relevancy for your targeted key phrase.
13. Try to include your company’s physical address on your site, including a telephone number.
14. Make sure your home page loads in 8-10 seconds or less over a 56K modem.
15. Optimize your images and keep the page size low.
16. Try to use CSS as an external file and call it with a <link> tag.
17. Use the longer or plural version of a keyword, where possible.
18. Try to register your domain name with the exact keyword phrase you are targeting, using hyphens to separate the keywords. (e.g. if the keyword phrase is “star hotels Kochi”, then your domain: www.star-hotels-kochi.com)
19. Update your important pages at regular intervals.
20. Check your site for broken links periodically.
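Tip 2's robots.txt can be checked before a spider ever visits. Python's standard library ships a parser for the Robots Exclusion Protocol; the file contents below are a made-up example, not a recommendation for any particular site:

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt that blocks spiders from private directories, per tip 2.
robots_txt = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /stats/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Normal pages stay crawlable; the stats directory is off-limits.
print(rp.can_fetch("*", "http://www.example.com/index.html"))      # True
print(rp.can_fetch("*", "http://www.example.com/stats/log.html"))  # False
```

Running this kind of check before uploading the file helps avoid accidentally blocking the whole site.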

Glossary

A

Affiliate
A related site often linked to your own, to form a "partnership" in the broadest sense.

Affiliate Linking
The provision of reciprocal links between affiliates.

Algorithms
The rules by which search engines rank sites.

Acquisition
The point of acquiring a client, i.e. where a site fulfils its role. Usually a sale, but sometimes a subscription or similar will count.

Automated Spider Engines
Engines which deploy an automated programme to visit and retrieve data from your site.

B

Bid Capping
The technique of setting a limit for maximum bid for use by a bid-listing monitoring service, such as that offered by BMM.

Bidding Engine
Search engines that operate a bid-listing model, such as Overture or Espotting.

Bid Listing
The use of PPC models by search engines in a dynamic, real-time auction for search listings. As the auction is never-ending and real time, your listings will change as the bidding level changes. Bids drop off as budgets run dry or are cancelled, and new bids are added. This makes monitoring and changing of bids essential.

Bid-Listing Monitoring
The act of monitoring activity in bid engines in order to keep a bid listing campaign as competitive as possible.

Bookmarking
The action of marking a webpage in your browser, to make it easy to return to later. Most statistics packages will measure this as a new visitor, even though a loyalty relationship has been established.

Brand Infringement
The act of using another company's brand in a non-ethical way, such as inclusion in meta tags, claiming to be that company, or making false statements about that company.


Brand Intelligence
Bigmouthmedia's brand infringement detection and investigation service.

C

Cascading Style Sheets (CSS)
Files that instruct browsers on how to format a document (which fonts to use, how links should behave etc.) Some browsers treat CSS in different ways.

Click
Term used to describe when a user selects a link or search engine listing by pressing the mouse button while holding the pointer over the link.

Client-Side
Web coding that instructs the browser to undertake a task. The opposite of server-side.

Cloaking
See IP Delivery

CPA
Cost per Acquisition

CPC
Cost per Click - The actual average cost incurred by the advertiser by the action of a potential customer following a link found on, for instance, a SERP.

CPM
Cost Per Mille - the cost per 1,000 impressions (page views).
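The difference between the CPC and CPM charging models boils down to simple arithmetic. A Python sketch with made-up campaign numbers:

```python
def cpc_cost(clicks, cost_per_click):
    """Total spend under pay-per-click: you pay for each click received."""
    return clicks * cost_per_click

def cpm_cost(impressions, cost_per_mille):
    """Total spend under CPM: you pay per 1,000 page views (impressions)."""
    return impressions / 1000 * cost_per_mille

# Hypothetical campaigns: 400 clicks at $0.25 each,
# versus 100,000 impressions at a $2.00 CPM rate.
print(cpc_cost(400, 0.25))      # 100.0
print(cpm_cost(100_000, 2.0))   # 200.0
```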

Creatives
Titles and Descriptions as they are submitted to directory editors

Crawler-based search engines
Engines that use automated software to index the billions of files online.

Character set
An encoding scheme in which each character is represented by a different binary value. For example, ISO8859-1 is an extended Latin character set that supports more than 40 Western European languages.

D

Data gathering
The process of building profiles of customers by collecting data about their activity. Knowing your customers better will allow you to provide better products and customer service.

Database Presence
The inclusion of database driven content within a site

Design Technology
Technology used in the design process

Directory Creatives
Text written by BMM and submitted to editors

Directory Search
A search made by clicking through directory categories without use of keywords or phrases

Domain
All devices connected to the internet are referenced by their IP address. To make using the internet easier, most IP addresses have names associated with them - for instance, example.com. A domain can have any number of sub-domains prefixed before it to create a complete domain name.

Domain Mapping
The server administration required to 'point' a domain at a specific location or "IP Address"

Data processing
Operations performed on data to provide useful information to users.

E

E-Commerce
The processes of selling online, via a website

Editorial Search Engine
Engines that rank sites using human editors and not by reading meta tags

Emergency Alerts
To signal that an event that is being monitored has occurred, for instance, by email or SMS.

F

Featured Site
The term used by many portals and other search properties that incorporate some element of bid-listing results in their normal search results.

Flash
Macromedia Flash Technology

G

Generic Keywords
General terms relating to subject matter, e.g. marketing (generic) vs. search engine optimization (specific).

Google PageRank
Google's own system for ranking web pages

Googlebot
The agent name of Google’s search engine spider which crawls the web to create its searchable index.

H

Hit
An often mis-used word that refers to any file download from a website, including one hit for the HTML page, and one for each embedded file such as graphics, Flash movies, WAV files etc. Consequently, one page view can generate several hits. Often, the word hit is mis-used where the phrase "visitor session" would be more appropriate.

HTML Coding
Hyper Text Markup Language is a coding Language used to make HyperText documents for use on the Web.

Hyperlink
A link in a document to information within that document or another document. These links are usually represented by highlighted words or images. When a reader selects a hyperlink, the computer display switches to the document or portion of the document referenced by the hyperlink.


I

IP Address
A four-byte numeral (bytes have a value between 0 and 255), which represents an exact address of an internet location, e.g. 255.0.192.47. There are almost 4,300 million unique IP addresses, however this is currently not enough for global use, and often users share IP addresses through a proxy server.
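That "four-byte numeral" can be seen directly with Python's standard ipaddress module (the address below is the example from the definition):

```python
import ipaddress

addr = ipaddress.ip_address("255.0.192.47")

# The dotted form is just a four-byte integer in disguise.
print(int(addr))      # 4278239279
print(addr.packed)    # b'\xff\x00\xc0/' - exactly four bytes
print(2 ** 32)        # 4294967296 possible IPv4 addresses (~4,300 million)
```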

IP Delivery (Cloaking)
A technique whereby false content is presented to search engine spiders in an attempt to gain ranking points. Search engine spiders are recognized by their IP Address.

IT
Information Technology

J

JavaScript
A web coding language with different capabilities to HTML. Not recognized by Search Engine Spiders. Can be client-side, and sometimes server-side.

JHTML
Normal HTML that includes "server-side" JavaScript instructions.

K

Keyword Search
A search made by keying a keyword, or combination of words into a search box on a search engine, directory or portal.

Key Phrases
Combinations of keywords, also called "search terms"

Keyword Search Frequency
Number of search requests for particular keyword

Keywords
The actual words used to describe the site in meta tags and creatives and to find the site using "searches"

Keyword Density
The ratio of the number of occurrences of a particular keyword or phrase to the total number of words in a page. One element of search engine optimization.
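As a rough illustration of the definition above, keyword density can be computed in a few lines of Python (the naive whitespace split is an assumption; real density tools tokenise more carefully):

```python
def keyword_density(text, keyword):
    """Occurrences of `keyword` divided by total words, as a percentage."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = words.count(keyword.lower())
    return 100.0 * hits / len(words)

# Hypothetical 10-word page: "running" appears 2 times -> 20% density.
page = "running shoes for every runner should buy running shoes online"
print(keyword_density(page, "running"))  # 20.0
```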

Keyword Proximity
Keyword proximity measures the closeness between two keywords.
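One simple way to make this definition concrete is to measure the smallest number of words separating any two occurrences of the keywords. The Python sketch below uses that word-distance metric, which is one reasonable choice rather than a standard:

```python
def keyword_proximity(text, word_a, word_b):
    """Smallest number of words separating the two keywords.

    Returns 0 for adjacent keywords, None if either keyword is missing.
    """
    words = text.lower().split()
    pos_a = [i for i, w in enumerate(words) if w == word_a.lower()]
    pos_b = [i for i, w in enumerate(words) if w == word_b.lower()]
    if not pos_a or not pos_b:
        return None
    return min(abs(a - b) for a in pos_a for b in pos_b) - 1

page = "buy cheap running shoes online cheap deals on shoes"
print(keyword_proximity(page, "running", "shoes"))  # 0 - adjacent
print(keyword_proximity(page, "cheap", "shoes"))    # 1 - one word between
```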

L

Links
An electronic connection between two Web sites (also called "hyper-link").

Link Farm
Web pages deliberately created to increase the number of links between sites and therefore link popularity. A dangerous technique, unpopular with the search engines.

Link Popularity
A method used by search engines to determine the importance of a listed site, based on the idea that a site with many inbound links is more credible than one without.


Logarithmic
A mathematical term: the logarithm of a value is the exponent to which a base (usually 10 or e) must be raised to produce that value. If the base is 10, the logarithm is called common; if the base is e, it is called natural.

M

Message Boards
Websites that allow visitors to post messages, and for others to reply to them. Unlike email, all discussions take place in public, and anyone can join in. Usually, such discussions are themed by subject matter.

Meta Search Engine
A search Engine that simultaneously refers to several other search properties to retrieve search results. Copernic and Vivisimo are examples.

Meta Tag
HTML coding embedded in the site, to provide spiders with keyword information

N

Newsletter Marketing
The process of building relationships with existing customers and gaining new ones by publishing and mass distributing an email containing features and articles of interest to them.

O

Obfuscation
Using IP delivery to serve well-optimized content to search engine spiders and deliberately poorly optimized content to users, so that any content theft by competitors results in poor search engine listings.

P

Paid Linking
The act of paying for another site to link to your own.

Page View (Impression)
One view of a web page (or banner ad) by a user.

Partner Sites (Bid Engines Terminology)
Relates to bid engines. Partner sites are sites that also display some or all of the bid listing results from that bid engine.

Pay Per Click
A charging model for search engine listings based on a set charge for users clicking on that search engine listing. This model is used by Overture and Espotting in the UK, in a bidding fashion (see bid listing)

Permission Based Email
Since late 2003, to email individuals in Europe as part of a mass distribution, the express, unequivocal permission of the recipient must be obtained first, and this should form the basis of an ethical, permission-based campaign.

Portals
Web sites which offer some or all of the following: search, email, news, weather, shopping

PPC
See Pay per Click

PR
Public Relations - the process of maintaining and controlling your relationship with the public and your public persona.

Proxy Server
An internet server that allows several users to share one internet connection, reducing the number of IP addresses required. Users are assigned IP Addresses from a limited range, as and when they are required.

Parallel processing
When an array of processors or segments of the CPU work at the same time to speed processing or multi-task.

Q

R

Rank
The position attained on the Search Engines, Directories and Searchable Portals

Rebrand
The process of changing the form of an established brand and managing that change effectively to avoid losing brand recognition.

Reciprocal Linking
The act of two sites linking to each other, for mutual benefit, and with no cost incurred by either site.

Robots
Program which search engines send out to read the meta tags and/or body HTML of a submitted site.

Robots Exclusion Protocol (REP)
Text file placed at, e.g., www.example.com/robots.txt. Used to prevent spiders from trawling private or sensitive areas, image folders, stats files, etc.
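As an illustration, a well-behaved spider can check such a file before fetching anything. This sketch uses Python's standard robotparser module; the rules and URLs shown are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt blocking a private area and a stats folder.
rules = """\
User-agent: *
Disallow: /private/
Disallow: /stats/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A spider asks before fetching each URL.
print(rp.can_fetch("Googlebot", "http://www.example.com/index.html"))      # True
print(rp.can_fetch("Googlebot", "http://www.example.com/private/a.html"))  # False
```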

ROI
Return On Investment - A measure of the success of a marketing campaign in comparison to the money spent on that campaign.

S

Search Engine
A website that provides a list of useful links (SERP's) in response to a text query. Some search engines, such as Yahoo!, have gradually evolved from pure search engines into portals, while others, such as Google, remain primarily search engines.

Search Engine Optimization
A crucial element of search engine marketing - the process of adapting web pages to maximize their effectiveness at matching (and occurring highly in the results pages for) common search phrases on search engines.

Search Engine Log Data
Actual search engine logs, from which BMM retrieve data on search frequency of particular keywords

Search Engine Results Page
The page on which search results are displayed in response to a query submitted by the user

Search Term Evolution
New words or phrases which are relevant to your industry

SEO
Search Engine Optimization - the process of maximizing search property driven traffic to a site by analytical means

SERP's
Search engine results pages

Server-Side
Web coding that instructs a server to undertake a task. Opposite to client-side

SHTML
Normal HTML that includes "server-side" instructions

Slang
The use of alternative or colloquial phrases. In terms of SEM, slang is important to take into consideration when formulating an optimization strategy.

Spam/Spammed
General term relating to practices not approved by the engines and editors

Spiders
See Robots

Style Sheet Errors
Incompatibilities between Cascading Style Sheets designed for one browser, when viewed through another.

Submission
The act of making search engines aware of new web pages and sites. Each search engine has its own individual process for doing this.

Server
A process that runs on a host that relays information to a client upon the client sending it a request

T

Top Listings
Listings which rank in the top 30 of results

Targeted Traffic
The concept of directing traffic to a website based on the requirements of that traffic (i.e. matching user "wants" with site provisions).

Traffic Intelligence
Bigmouth media's web-traffic stats analysis service.

U

Unique User
One individual user to a site. This user may visit once or return often, but will still count as one unique user.

URL
Uniform Resource Locator (address, e.g. http://www.google.com)

User Agent Delivery
Similar to IP Delivery, except that Search Engine Spiders are recognized by their name (user agent) rather than by their IP address. This technique is spam.

V

Viral Marketing
A form of marketing that is self sustaining and self promoting. Usually, the core idea is so appealing that the public take on the role of 'spreading the message' themselves. In its favor is that it is incredibly effective, but control - and even 'ownership' - of the campaign is relinquished.


Visitor Session
A full and complete visit by a user to a website from start to finish of that visit. This is usually considered complete if a user is inactive for a set length of time, most commonly 30 minutes.

W

Web Trawler
See Robots

Web Crawling
A web crawler (also known as a web spider or ant) is a program which browses the World Wide Web in a methodical, automated manner. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches.

W3C
World Wide Web Consortium, the governing body for web standards.

Webmaster
The person responsible for maintaining and updating a Web site

X

XML Trusted Feed Campaign Management
Instead of relying on spiders (robots) to create an index of your website, you can use an XML feed to take control of the process yourself by submitting a feed of your content in a particular format (XML) to participating search engines. It is particularly useful for websites whose content changes very quickly or cannot be accessed with conventional spidering technology.

Y

Z

Friday, March 26, 2010

What is Meta tag?

A meta tag is a special HTML tag that provides information about a Web page. Unlike normal HTML tags, meta tags do not affect how the page is displayed. Instead, they provide information such as who created the page, how often it is updated, what the page is about, and which keywords represent the page's content. Many search engines use this information when building their indices.
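To illustrate, here is a sketch of how a spider might read meta tags from a page, using Python's standard HTML parser. The sample page, its tag values, and the MetaTagReader class name are all invented for this example:

```python
from html.parser import HTMLParser

class MetaTagReader(HTMLParser):
    """Collects name/content pairs from <meta> tags, as a spider might."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"].lower()] = d["content"]

# A hypothetical page head; the description and keywords are illustrative.
page = """<html><head>
<title>Effective Search Engine Optimization</title>
<meta name="description" content="Five basic rules of search engine optimization.">
<meta name="keywords" content="search engine optimization, SEO, ranking">
</head><body>...</body></html>"""

reader = MetaTagReader()
reader.feed(page)
print(reader.meta["description"])
print(reader.meta["keywords"])
```

Note that nothing in the visible page changes: the tags live only in the head, which is exactly why search engines read them separately.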

5 Basic Rules of Search Engine Optimization

Search engine optimization is crucial for anyone who wants people to visit his or her Web site. You can place as many ads as you like, but most people are still going to find your site because of its listings in search engines or directories.

It's a fact that most people who use search engines only look at the first one or two pages of search listings. The goal of effective search engine optimization is to get your pages listed on those critical first pages for particular key terms.

1) Remember that each page of your site is a separate entity.
You need to apply the basics of effective search engine optimization to each individual page.

2) Choose appropriate key words or phrases for each page.
Phrasing matters. Many more people search for the term “effective search engine optimization” than for “effectively optimizing for search engines”. To find out which key words or phrases are more popular than others, you can use a tool such as Overture's Search Term Suggestion Tool or Wordtracker.

3) Give each page an appropriate title that includes the key word or phrase at least once.
We often see sites that use the name of their business as the title of all their pages. Is every page of their site about their business? Probably. But chances are really low that people will be searching for their business’ name!

4) Put the key words or phrase that you've chosen in the page's title tag, meta keywords, and meta description.
Make sure that the meta description is as appealing as possible, because some search engines actually use this description in the search engine results pages that people will be reading.

5) Be sure your chosen key words or phrase is repeated judiciously throughout the content of the page.

You don't want to overdo it, or your page may be rejected as spam, but you need to repeat it enough times that the search engine's software will consider the phrase relevant.

Following are the main areas of a web page to which search engines give more importance in their ranking algorithms:

Title tag, The main body text, Meta tags, Link popularity, Domain name, Heading tags, Proximity of Keywords, Bold or Italic texts, Folder or file names, Image alt tags, Title attribute and keyword in the beginning of the sentence.

Based on the importance, we can rank those areas as below:

Title                     2.0
Link popularity           2.0
The main body text        1.5
Domain name               1.0
Keyword prominence        1.0
Heading tags              0.5
Proximity of keywords     0.5
Bold or Italic            0.4
Folder or file name       0.3
Meta description          0.3
Alt tag                   0.2
Title attribute           0.2
Meta keywords             0.1
Total Score              10.0
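Treating the table above as a rough scoring rubric, an on-page score could be sketched as a weighted sum. The weights come from the table; how, or whether, a real engine combines factors this way is purely an assumption:

```python
# Weights taken from the table above (illustrative only, not a real algorithm).
WEIGHTS = {
    "title": 2.0, "link_popularity": 2.0, "body_text": 1.5,
    "domain_name": 1.0, "keyword_prominence": 1.0, "heading_tags": 0.5,
    "keyword_proximity": 0.5, "bold_italic": 0.4, "folder_file_name": 0.3,
    "meta_description": 0.3, "alt_tag": 0.2, "title_attribute": 0.2,
    "meta_keywords": 0.1,
}

def page_score(factors):
    # 'factors' maps factor name -> True if the keyword appears there.
    return sum(w for name, w in WEIGHTS.items() if factors.get(name))

# A page that uses the keyword in its title, body text, and headings:
print(page_score({"title": True, "body_text": True, "heading_tags": True}))  # 4.0
```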


Benefits of Search Engine Optimization

Search engines generate nearly 90% of Internet traffic and are responsible for 55% of e-commerce transactions. Search engine promotion has been shown to deliver the highest ROI compared to any other type of marketing, both online and offline. Search engines bring motivated buyers to you and hence contribute to increased sales conversions.
Search Engine Optimization offers an affordable entry point for marketing your website and an effective way to promote your business online. SEO makes for a long-term solution, is your access to sustained free traffic and a source of building brand name and company reputation.

Why Search Engine Optimization?

Search engine optimization is the process of increasing the number of visitors to a Web site by ranking high in the search results of a search engine. The higher a Web site ranks in the results of a search, the greater the chance that site will be visited by a user. It is common practice for Internet users not to click through pages and pages of search results, so where a site ranks in a search is essential for directing more traffic toward the site.

So search engine optimization focuses on techniques such as making sure that each web page has appropriate title tags and meta tags, and that the keyword or key phrases for the page are distributed throughout the content in a way that the particular search engine will like.

What is Search engine optimization (SEO)?

Search engine optimization (SEO) is the process of improving the volume or quality of traffic to a web site or a web page (such as a blog) from search engines via "natural" or un-paid ("organic" or "algorithmic") search results as opposed to search engine marketing (SEM) which deals with paid inclusion. The theory is that the earlier (or higher) a site appears in the search results list, the more visitors it will receive from the search engine. SEO may target different kinds of search, including image search, local search, video search and industry-specific vertical search engines. This gives a web site web presence.

As an Internet marketing strategy, SEO considers how search engines work and what people search for. Optimizing a website primarily involves editing its content and HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines.

The acronym "SEO" can refer to "search engine optimizers," a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site, SEO tactics may be incorporated into web site development and design. The term "search engine friendly" may be used to describe web site designs, menus, content management systems, images, videos, shopping carts, and other elements that have been optimized for the purpose of search engine exposure.

Another class of techniques, known as black hat SEO or spamdexing, use methods such as link farms, keyword stuffing and article spinning that degrade both the relevance of search results and the user-experience of search engines. Search engines look for sites that employ these techniques in order to remove them from their indices.

Relationship between Search Engine Ranking and PageRank

While the exact algorithm of each search engine is a closely guarded secret, search engine analysts believe that the search engine results (ranking) is some form of a multiplier factor of ‘Page Relevance’ and ‘PageRank’. Simply put, the formula would look something like:

PR(A) = (1-d) + d(PR(t1)/C(t1) + ... + PR(tn)/C(tn))

That's the equation that calculates a page's PageRank. It's the original one that was published when PageRank was being developed, and it is probable that Google uses a variation of it but they aren't telling us what it is. It doesn't matter though, as this equation is good enough.

In the equation 't1 - tn' are pages linking to page A, ‘C’ is the number of outbound links that a page has and ‘d’ is a damping factor, usually set to 0.85.

We can think of it in a simpler way:

A page's PageRank = 0.15 + 0.85 * (a “share” of the PageRank of every page that links to it)

where a page's “share” is the linking page's PageRank divided by the number of outbound links on that page.

A page “votes” an amount of PageRank onto each page that it links to. The amount of PageRank that it has to vote with is a little less than its own PageRank value (its own value * 0.85). This value is shared equally between all the pages that it links to.
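This voting process can be simulated directly. Below is a minimal sketch of the published formula applied to a made-up three-page web, iterated until the values settle; the page names and link structure are invented for illustration:

```python
# links[page] = pages that 'page' links out to (a hypothetical mini-web).
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
d = 0.85  # the damping factor from the equation

pr = {p: 1.0 for p in links}  # starting guess for every page
for _ in range(50):           # repeat until the values converge
    new = {}
    for page in links:
        # Every page linking to 'page' votes a share of its own PageRank,
        # split equally across its outbound links.
        share = sum(pr[q] / len(links[q]) for q in links if page in links[q])
        new[page] = (1 - d) + d * share
    pr = new

for page, value in sorted(pr.items()):
    print(page, round(value, 3))
```

Notice that C, which receives two inbound votes, ends up with a higher PageRank than B, which receives only a half-share from A.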

From this, we could conclude that a link from a page with PR4 and 5 outbound links is worth more than a link from a page with PR8 and 100 outbound links. The PageRank of a page that links to yours is important, but the number of links on that page is also important. The more links there are on a page, the less PageRank value your page will receive from it.

If the PageRank value differences between PR1, PR2 ...PR10 were equal then that conclusion would hold up, but many people believe that the values between PR1 and PR10 (the maximum) are set on a logarithmic scale, and there is very good reason for believing it. Nobody outside Google knows for sure one way or the other, but the chances are high that the scale is logarithmic, or similar.

Whichever scale Google uses, we can be sure of one thing. A link from another site increases our site's PageRank. Just remember to avoid links from link farms.

PageRank in Google's own Words

Google explains PageRank as follows:

PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an
indicator of an individual page's value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves "important" weigh more heavily and help to make other pages "important."

Important, high-quality sites receive a higher PageRank, which Google remembers each time it conducts a search. Of course, important pages mean nothing to you if they don't match your query. So, Google combines PageRank with sophisticated text-matching techniques to find pages that are both important and relevant to your search. Google goes far beyond the number of times a term appears on a page and examines all aspects of the page's content (and the content of the pages linking to it) to determine if it's a good match for your query.

Google PageRank

Emergence of Google PageRank

Google realized the problem conventional search engines faced in dealing with this situation. If the control of relevance remained with the webmasters, the ranking results would remain contaminated with sites artificially inflating their keyword relevance.

Web, by its very nature is based on hyperlinks, where sites link to other prominent sites. If you take the logic that you would tend to link to sites that you consider important, in essence, you are casting a vote in favor of the sites that you link to. When hundreds or thousands of sites link to a site, it is logical to assume that such a site would be good and important.

Taking this logic further, the Google founders, Sergey Brin and Larry Page, formulated a search engine algorithm that shifted the ranking weight to off-page factors. They evolved a formula called PageRank (named after its inventor, Larry Page) in which the algorithm counts the number of sites that link to a page and assigns it an importance score on a scale of 0-10. The more sites that link to a page, the higher its PageRank.

The Google Toolbar

You can download Google Toolbar (free) and install it in your Internet Explorer within minutes. Amongst other useful features, it displays the PageRank of each web page you visit.

The Google toolbar appears just below your Internet Explorer browser and can be used for making a search on the web from any page. The toolbar displays the PageRank of each web page on a scale of 0-10. If you have the Google toolbar installed in your browser, you will be used to seeing each page's PageRank as you browse the web. Google does not display the PageRank of web pages that it has not indexed. Please note that the toolbar displays the PageRank of individual pages and not of the site as a whole.

PageRank Means

What is PageRank?

PageRank is a unique algorithm developed by Google founders Larry Page and Sergey Brin at Stanford University. It determines the importance of a web page, measuring it on a scale from 0-10, where 10 is the highest. The main factor behind the PageRank algorithm is link popularity. If one site links to another site, Google interprets this link as a vote; the more votes cast, the more important the page must be.

From here on in, we'll occasionally refer to PageRank as “PR”.

Note:
Not all links are counted by Google. For instance, they filter out links from known link farms. Some links can cause a site to be penalized by Google. They rightly figure that webmasters cannot control which sites link to their sites, but they can control which sites they link out to. For this reason, links into a site cannot harm the site, but links from a site can be harmful if they link to penalized sites. So be careful which sites you link to. If a site has PR0, it is usually a penalty, and it would be unwise to link to it.

History of Site Ranking

In the early 1990s, when the web was emerging, new sites carrying industry-specific content were being added to the web each day. Web surfers, on the other hand, had very few tools to locate such sites, which they believed were out there but whose domain names or web addresses they did not know. With the birth of Yahoo in 1994, surfers were offered some relief. Yahoo classified each site it discovered in a neatly organized directory list and also embedded a search engine in its site to search for sites based on 'keywords' existing in its database. Several other search engines, such as AltaVista, Excite, and Lycos, followed, offering site search facilities to users. Most of these search engines relied heavily on Meta Tags to classify the relevance of websites based on the keywords they found in the tags.

Things seemed to work out fine until site owners and webmasters realized they could 'embed' industry-specific keyword phrases in their Meta Tags and other site code, manipulating their way higher in search results. Over a period of time, search engine results became cluttered with sites that spammed their content with relevant keywords but offered poor content to the visitor. The very essence, credibility, and importance of search engines was now challenged: how could they offer a more refined search output to their users?

Google Guide

What is Google?

“Googol” is the mathematical term for a 1 followed by 100 zeros. The term was coined by Milton Sirotta, nephew of American mathematician Edward Kasner, and was popularized in the book, “Mathematics and the Imagination” by Kasner and James Newman. Google's play on the term reflects the company's mission to organize the immense amount of information available on the web.

Google Technology

Google.com began as an academic search engine. In the paper that describes how the system was built, Sergey Brin and Lawrence Page give an example of how quickly their spiders can work. They built their initial system to use multiple spiders, usually three at one time. Each spider could keep about 300 connections to Web pages open at a time. At its peak performance, using four spiders, their system could crawl over 100 pages per second, generating around 600 kilobytes of data each second.

Google runs on a distributed network of thousands of low-cost computers and can therefore carry out fast parallel processing. Parallel processing is a method of computation in which many calculations can be performed simultaneously, significantly speeding up data processing. Google has three distinct parts:

* Googlebot, a web crawler that finds and fetches web pages.
* The indexer that sorts every word on every page and stores the resulting index of words in a huge database.
* The query processor, which compares your search query to the index and recommends the documents that it considers most relevant.

Let's take a closer look at each part.

Googlebot, Google's web Crawler

Googlebot is Google's web crawling robot, which finds and retrieves pages on the web and hands them off to the Google indexer. It's easy to imagine Googlebot as a little spider scurrying across the strands of cyberspace, but in reality Googlebot doesn't traverse the web at all. It functions much like your web browser, by sending a request to a web server for a web page, downloading the entire page, and then handing it off to Google's indexer.

Googlebot consists of many computers requesting and fetching pages much more quickly than you can with your web browser. In fact, Googlebot can request thousands of different pages simultaneously. To avoid overwhelming web servers, or crowding out requests from human users, Googlebot deliberately makes requests of each individual web server more slowly than it's capable of doing.

Googlebot finds pages in two ways: through an add URL form, www.google.com/addurl.html, and through finding links by crawling the web.

Google's Indexer

Googlebot gives the indexer the full text of the pages it finds. These pages are stored in Google's index database. The index is sorted alphabetically by search term, with each index entry storing a list of the documents in which the term appears. This data structure allows rapid access to documents that contain user query terms.

To improve search performance, Google ignores (doesn't index) common words called stop words (such as the, is, on, or, of, how, why, as well as certain single digits and single letters). Stop words are so common that they do little to narrow a search, and therefore they can safely be discarded. The indexer also ignores some punctuation and multiple spaces, as well as converting all letters to lowercase, to improve Google's performance.
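In miniature, the indexer's job can be sketched as: tokenize each page, discard stop words, and record which documents each remaining word appears in. This toy example omits positions, capitalization, and everything else a real index stores, and the stop-word list and sample documents are illustrative:

```python
import re

# A small illustrative stop-word list, echoing the examples above.
STOP_WORDS = {"the", "is", "on", "or", "of", "how", "why", "a", "as"}

def build_index(docs):
    # docs maps a document name to its text; returns word -> set of doc names.
    index = {}
    for name, text in docs.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            if word in STOP_WORDS:
                continue  # stop words are too common to narrow a search
            index.setdefault(word, set()).add(name)
    return index

docs = {
    "page1": "How a search engine builds the index",
    "page2": "The index allows rapid access to documents",
}
index = build_index(docs)
print(sorted(index["index"]))  # ['page1', 'page2'] - both pages mention 'index'
print("the" in index)          # False - the stop word was discarded
```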

Introduction to How Search Engines Work

Basically there are two types of search facilities: crawler-based search engines and human-edited directories. Crawler-based search engines use robots, called crawlers or spiders, to index websites. As soon as you submit your website pages to a search engine via its submission page, the search engine spider will index your entire site. A 'spider' is an automated program run by the search engine system. The spider visits a web site, reads the content on the actual site and the site's Meta tags, and follows the links that the site connects to. The spider then returns all that information to a central depository, where the data is indexed. It will visit each link you have on your website and index those sites too. Some spiders will only index a specific number of pages on your site, so don't create a site with 500 pages!

The spider will come back to the sites regularly to check for any information that has changed. How often this happens is determined by the administrators of the search engine.

A spider's index is rather like a book: it contains a table of contents, the actual content, and the links and references for all the websites it finds during its search, and it may possibly index up to a million pages a day.

When you ask a search engine to find information, it is essentially searching through the index it has created and not truly searching the Web. Different search engines produce different rankings because not every search engine uses the same algorithm to search through the index.

One of the things a search engine algorithm scans for is the frequency and position of keywords on a web page, but it can also identify artificial keyword stuffing, or spamdexing. The algorithms then analyze the way pages link to other pages on the Web. By examining how pages link to each other, an engine can both determine what a page is about and whether the keywords of the linked pages are related to the keywords on the original page.

Meaning of Search Engines

What is Search Engine?
Internet search engines (e.g. Google, AltaVista) help users find web pages on a given subject. The search engines maintain databases of web sites and use programs (often referred to as “spiders” or “robots”) to collect information, which is then indexed by the search engine. Similar services are provided by “directories” which maintain ordered lists of websites, e.g. Yahoo!

How Internet Search Engines Work
The good news about the Internet and its most visible component, the World Wide Web, is that there are hundreds of millions of pages available, waiting to present information on an amazing variety of topics.

When you need to know about a particular subject, how do you know which pages to read? If you're like most people, you visit an Internet search engine.

Internet search engines are special sites on the Web that are designed to help people find information stored on other sites. There are differences in the ways various search engines work, but they all perform three basic tasks:


• They search the Internet -- or select pieces of the Internet -- based on important words.
• They keep an index of the words they find, and where they find them.
• They allow users to look for words or combinations of words found in that index.

Early search engines held an index of a few hundred thousand pages and documents, and received maybe one or two thousand inquiries each day. Today, a top search engine will index hundreds of millions of pages, and respond to tens of millions of queries per day.

Before a search engine can tell you where a file or document is, it must be found. To find information on the hundreds of millions of Web pages that exist, a search engine employs special software robots, called spiders, to build lists of the words found on Web sites. When a spider is building its lists, the process is called Web crawling. (There are some disadvantages to calling part of the Internet the World Wide Web -- a large set of arachnid-centric names for tools is one of them.) In order to build and maintain a useful list of words, a search engine's spiders have to look at a lot of pages.

How does any spider start its travels over the Web? The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.
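That traversal is essentially a breadth-first walk over links. In this sketch the "web" is a small, invented in-memory link graph rather than live HTTP fetches, so the site names are hypothetical:

```python
from collections import deque

# A tiny hypothetical web: page -> pages it links to.
web = {
    "popular-site": ["about", "news"],
    "about": ["contact"],
    "news": ["popular-site", "archive"],
    "contact": [],
    "archive": [],
}

def crawl(start):
    # Breadth-first: index a page, then follow every link found on it.
    seen, queue, order = {start}, deque([start]), []
    while queue:
        page = queue.popleft()
        order.append(page)            # "indexing the words on its pages"
        for link in web.get(page, []):
            if link not in seen:      # never re-crawl a visited page
                seen.add(link)
                queue.append(link)
    return order

print(crawl("popular-site"))  # ['popular-site', 'about', 'news', 'contact', 'archive']
```

Starting from the popular page, the crawl spreads outward level by level, which is exactly how a spidering system quickly covers the most widely used portions of the Web.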