The Ultimate Student’s Guide to Search Engines

By Alex Miller & Brian Graham

Table of Contents

1. Introduction
2. How Search Engines Work
2.1. Function 1: Crawl Sites and Build Indexes
2.2. Function 2: Provide Ranked Relevant Results Based on Queries
3. The Difference Between a Database and a Search Engine
3.1. Databases are Specialized
3.2. A Search Engine is Generalized
4. History of Search Engines
4.1. Before Search and the Introduction of the Internet
4.2. The First Engines Ever
4.3. The Early Years of the 1990’s
4.4. Directories Are Created
4.5. End of the Golden Era of the 90’s and Birth of a Giant
4.6. The Early 2000’s
4.7. Monetization and Squeezing Competition
4.8. The Counter to Paid Advertising: Search Engine Optimization
4.9. The Rest of the History in the Late 2000’s
4.10. Modern Day and Behind the Scenes Influence
5. The Business of Google vs. SEO
6. Future of Search Engines
6.1. Integrated Technology
6.2. Becoming more Man than Machine
6.3. Continuing Specialization of Vertical Search
6.4. Further Organization of Data
7. Conclusion
8. References

1. Introduction

Twenty years ago, if you had asked someone to search for something “online”, they likely would have given you a confused look. Today, search engines are so prominent that one brand of search engine, Google, has been turned into a verb signifying the process of using a search engine to find information. “Googling” something means entering a query into the search engine’s query form and hitting the return key to see what results you get.

But search engine history goes back farther than the advent of the World Wide Web (WWW) and the internet we know today. Many stories lead up to the creation of the first program that searched, the growth of the internet, and the development of modern indexes. In this article you’ll discover what a search engine really is, how search engines came to be, and what they will look like in the future.

2. How Search Engines Work

The technology behind a search engine may seem simple enough: you type in a word or phrase and get back a list of related websites. In reality, it is a complex mathematical algorithm working together with programs that constantly update databases and indexes. It’s a living machine. Here are the two main functions search engines perform.

2.1. Function 1: Crawl Sites and Build Indexes

First, a program runs constantly behind the scenes. This program reads all the code and content on a website to determine what its words, documents, and media are. Each web page can be thought of as a single “document”. To reach other pages, they have to be linked somehow; the program finds these links and begins jumping between the pages, continually storing and mapping the data into a large index.
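
To make the idea concrete, here is a minimal crawler sketch in Python. It is a toy under stated assumptions: the seed URL is hypothetical, and a real crawler would also obey robots.txt, throttle its requests, and run in parallel across many machines.

```python
# Minimal crawler sketch: fetch a page, record its words in an index,
# then queue up the links it finds. Standard library only.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkAndTextParser(HTMLParser):
    """Collects the hyperlinks and visible words of one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.words = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        self.words.extend(data.lower().split())

def crawl(seed_url, max_pages=10):
    index = {}  # word -> set of pages containing it
    queue, seen, fetched = [seed_url], {seed_url}, 0
    while queue and fetched < max_pages:
        url = queue.pop(0)
        try:
            html = urlopen(url).read().decode("utf-8", errors="ignore")
        except (OSError, ValueError):
            continue  # unreachable or malformed URL; skip it
        fetched += 1
        parser = LinkAndTextParser()
        parser.feed(html)
        for word in parser.words:
            index.setdefault(word, set()).add(url)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index

# index = crawl("https://example.com")  # hypothetical seed URL
```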

[Photo of a Facebook server room, courtesy of Data Center Knowledge]

The internet as we know it today is a network of content built on these links and pages. Sites are indexed by these programs, referred to as “spiders” or “crawlers”, which collect data from sites and organize it into indexes (which are databases). As you can imagine, the amount of data collected is enormous: the estimated amount of data now stored on the internet is over 1.3 trillion gigabytes. All of it is expensive to store and must be rapidly accessible to users, which is why data centers like the one pictured above exist all over the world.

Did You Know: Just one of Google’s data centers is over 3.5 football fields in size. That’s a lot of servers!1

2.2. Function 2: Provide Ranked Relevant Results Based on Queries

When users input a query to the search engine, it uses an algorithm and its index to return a list of pages it thinks you want to see. The order of these pages is determined by the algorithm, which tries to guess the relevancy of the pages to your query.

Today’s algorithms utilize hundreds of on-page and off-page factors to determine relevancy. In the early days, engines merely matched words and returned whatever they had. As the web grew, the quality of the algorithms obviously had to evolve with it to bring the most relevant content to searchers. Factors include site popularity, keywords, amount of content, links, and so on.
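
The real factors and their weights are a closely guarded secret, but the general shape of the computation can be sketched. In the toy below, the factor names and weights are entirely made up for illustration:

```python
# Toy multi-factor ranking: each page's score is a weighted sum of its
# factor values. Real engines combine hundreds of (secret) factors.
WEIGHTS = {"keyword_matches": 3.0, "inbound_links": 2.0, "popularity": 1.0}

pages = [  # hypothetical index entries for one query
    {"url": "a.example", "keyword_matches": 4, "inbound_links": 2, "popularity": 5},
    {"url": "b.example", "keyword_matches": 1, "inbound_links": 9, "popularity": 8},
    {"url": "c.example", "keyword_matches": 3, "inbound_links": 1, "popularity": 1},
]

def score(page):
    return sum(weight * page[factor] for factor, weight in WEIGHTS.items())

for page in sorted(pages, key=score, reverse=True):
    print(f"{page['url']}: {score(page):.1f}")
```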

You may be wondering how to get a website to rank. Ranking is directly related to these factors, and good resources on the subject exist online. The big search engines have also provided some information about how to rank on their engines.

Google’s Advice

Google insists that pages be made for users, not search engines. It suggests the following ways to do this:

  • Be as natural as possible, using rich, high-quality content. Help the user answer their query rather than trying to rank for certain terms or products.
  • Don’t “cloak” information, which means presenting different content to the search engine than users actually see on the page.
  • Use a good site hierarchy (i.e. links) to allow bots and users to easily navigate the site.
  • Fill in all descriptive and relevant metadata (such as title tags and descriptions for all content and media).
  • Build pages that give the user the fastest response possible.
  • Ideally, specialize the content, with the site owner an in-depth expert on a particular subject.
  • Make the content on each page unique, not copied from anywhere else; keep it up to date; and keep the site active.
  • Avoid “link buying” (paying others to link to your content), which Google works to combat; it looks for natural linkage instead.

In summary: put the user and their experience first, provide useful, information-rich resources on your site and in its links, and make all of this content easy to find.

Bing’s Advice

The advice Bing provides is concise:

  • Use a clean URL structure.
  • Don’t bury content in rich media (Flash, Java, Ajax, etc.), and don’t hide links in it either.
  • Don’t put text you want indexed inside images, and provide well-written, keyword-rich content.

Industry Knowledge

As the companies tell us, there are over 200 factors that affect your site’s ranking. However, based on research in the SEO industry, ten or so carry most of the weight. The Pareto principle strikes again: roughly 20% of the ranking factors provide 80% of the weight (these are, of course, rough numbers).

The important thing to take away is this: if you are creating unique content and accruing links from authorities in your field, then you will rank higher. Authority sites can be other websites in your industry, social media, universities, and so on.

3. The Difference Between a Database and a Search Engine

To really understand search engines and search engine history, you must be able to differentiate between what a search engine does and what a database does. While similar, the two have distinct differences, and each should be used in the appropriate way.

3.1. Databases are Specialized

If you’ve ever been a member of a club or owned an email address, then you’ve experienced a database. Databases are collections of highly organized information, with a unique record for each piece of information. If you signed up at a gym, say, you were assigned a member number tied to your information. Your email address is a unique address that only you own.

In a database, each record has various fields that describe it. All entries will have the same fields, with different data describing each unique entry. Most importantly, these fields are maintained by humans. In the gym example, the fields might be member number, name, address, phone number, and bank information. Looking up info in a database often requires exact matching or use of “wildcards” to bring back a list of potential matches. There is typically only one “right answer” to your search. You can look for this answer using any of the fields, and often databases come with a feature to limit results using various criteria to make it easier.
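
A small example makes the “exact match” point concrete. The sketch below builds the gym-membership table described above in SQLite (the field names and records are invented) and shows both an exact lookup and a wildcard lookup:

```python
# A database lookup: fixed fields, one record per member, and queries
# that either match exactly or use a wildcard pattern.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE members (number INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
db.execute("INSERT INTO members VALUES (1001, 'Ada Lovelace', '555-0100')")
db.execute("INSERT INTO members VALUES (1002, 'Alan Turing', '555-0101')")

# Exact match: there is one "right answer" (or none).
print(db.execute("SELECT * FROM members WHERE number = 1001").fetchall())

# Wildcard match: a short list of candidates sharing a pattern.
print(db.execute("SELECT * FROM members WHERE name LIKE 'A%'").fetchall())
```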

Databases are used by businesses and individuals all over the world to store information. This might be a database for all of the products a company sells, organized by its own internal product number, name and description. It could also be a collection of academic journals or student records. There are countless databases that are managed by people all over the world, and without them there would be no organization of data.

3.2. A Search Engine is Generalized

Now, this may already sound like a search engine, but distinct differences exist. To begin, a search engine utilizes a database, but a much less organized one. A search engine collects data from everywhere, not just one specific topic. Entries will not all have the same fields, as in a database, and the information in those fields changes all the time. Because of this looseness, the entries cannot realistically be managed by humans; instead they are managed by a program, the “spider” mentioned above.

The real difference is in the goals of the user. A database user is looking for a specific entry, and there can be no other answer. A search engine user typically looks for specific information but may not know where that information will come from. Given a handful of keywords or perhaps a specific question, the engine will return any results it thinks may be helpful; you don’t need an “exact” match, just a list of helpful information. You don’t search by a particular field either; rather, the algorithm tries to guess what will be relevant to your query.

The algorithm manages all the fields built into the engine’s database, which in this world is called an “index”. A program constantly runs in the background to update results and find new information to add to the index. It is an ever-shifting store of information, versus the precise, human-managed data of a database.
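
Contrast this with the database example earlier: an inverted index maps each word to the pages containing it, and a query returns every page matching any query word, ranked by how well it matches, rather than demanding one exact record. A toy version, with invented data:

```python
# Toy inverted index: word -> pages. A query needs no exact record;
# pages are ranked by how many query words they contain.
index = {
    "gym":     {"fitness.example", "health.example"},
    "hours":   {"fitness.example", "store.example"},
    "pricing": {"fitness.example"},
}

def search(query):
    hits = {}
    for word in query.lower().split():
        for page in index.get(word, set()):
            hits[page] = hits.get(page, 0) + 1
    return sorted(hits, key=hits.get, reverse=True)

print(search("gym hours"))  # fitness.example matches both words, so it ranks first
```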

As discussed above, a variety of factors go into this algorithm, and depending on what you search for, you may receive widely varying results. By adjusting your website’s information, you can help the site show up in results for certain key phrases.

The legitimacy of results is not guaranteed at all, because the index is managed by a program. Humans can make websites that rank for various keywords yet contain entirely false or unrelated information. The engines currently provide no quality control for these sites, so, unlike peer-reviewed journals, search results are often littered with questionable information. It is up to the user to judge the credibility of the results they receive.

Examples of search engines are Google, Yahoo, and Bing. There are actually many others, but these are the most popular ones used today. These search engines are constantly updating their information and providing new results, and while they provide a portal to seemingly endless information, virtually none of the information is in their control or owned by them.

4. History of Search Engines

Now that you better understand the difference between the database and search engine, let’s look at the search engine’s roots. Even though most databases exist on computers, the idea of a database has existed as long as organized and stored information has. Search engines are a bit newer.

4.1. Before Search and the Introduction of the Internet

While it’s hard to imagine today, information was hard to come by before search engines. “Knowledge is power” was truly relevant, because those with access to the proper knowledge held all the power. Most information had to be obtained from books and libraries or by learning in universities.

The first idea for search came from Vannevar Bush in July 1945, in an essay titled “As We May Think”. He urged scientists to come together once World War II was over to solve the problem of storing and retrieving information. It was here that using mathematics to relate pieces of information was proposed, and thus the seed of the search engine was planted.

His thought was that the databases of the day only allowed entry into data from one point, after which you had to start again. The human mind, he argued, works by association, so something had to be created to mimic this and let humans gain knowledge the same way. He also argued that human knowledge was expanding at a massive rate, and that man’s destiny was specialization rather than being a jack of all trades. Human civilization had become so complex by this time that storing data externally, rather than bogging down our limited memories, was necessary to keep it all straight.

Other important figures include Gerard Salton and Ted Nelson. Salton was an early father of the field, doing work that would eventually become the SMART Information Retrieval System; he is probably best known for introducing the vector space model, which most engines use. Nelson is known for his early efforts toward creating a network of computers, although his model differs from the WWW used today. Perhaps most famously, he coined the term “hypertext” in 1963, paving the way for much of the lingo used today.

While the idea for the search engine was being worked on, others were building the network that would connect all of this information. Early work on something called ARPAnet eventually led to the internet as we know it today. It grew out of the need to share information across the large geographical distances separating many scientists.

“The first data exchange over this new network occurred between computers at UCLA and Stanford Research Institute. On their first attempt to log into Stanford’s computer by typing “log win”, UCLA researchers crashed their computer when they typed the letter ‘g’.” –Inventors of the Modern Computer

Computers, at least as we know them today, were relatively new at this time. Among the most well-known early personal computers were the Apple I and, later, the IBM PC. It wouldn’t be until much later that personal computers could access the internet as we know it. Sir Tim Berners-Lee, an English computer scientist, invented the World Wide Web (WWW), which became publicly available in 1991; it is the system of linked hypertext documents that runs on top of the internet. Just a year before, the first search engine was actually born.

4.2. The First Engines Ever

At McGill University in Montreal in 1990, a student named Alan Emtage was having trouble finding the files he was tasked with locating. In response, he automated the process, letting anyone search for the information using a query. He named this program “Archie”, and the first search engine was born.


While the engine wasn’t given much publicity at the time, others soon followed. Most were created by universities and their students, since that was where research and technology in the field were concentrated. Other forerunners were Veronica and Jughead, which searched the document menus of the Gopher system.

The first programs differed from those of today. The internet was quite small at the time, and universities created most of the first websites in 1993. Because there wasn’t much to index, these search engines had static indexes that retrieved results based on exact matches. It was only when the internet began growing that the need for something more emerged.

Did You Know: If Alan had known the impact search would have, he could have patented the search engine. That would have meant all companies today would have paid him royalties for the technology.

The first crawler bot was created in 1993 by Matthew Gray. However, its original purpose had nothing to do with updating a search engine’s index; it was merely created to measure the growth of the new web. Soon, he saw that it could be useful and upgraded it to actually collect URLs rather than count them. Gray’s program was called the World Wide Web Wanderer.

At this stage, there was separation between the search engine and the web crawler. Search engines merely found and perhaps ranked web pages from an existing index, whereas web crawlers found and indexed new pages to add to their searchable database.

4.3. The Early Years of the 1990’s

By December 1993, there were three main search engines for the now-growing WWW. JumpStation was the first true modern search engine. Its creator, Jonathon Fletcher, originally built it merely to index the web: he noticed, while at the University of Stirling in Scotland, that there was no way for a search engine to recognize when content was updated without manually re-checking the index. His invention was the first modern web crawler, and it ran out of things to index after just ten days (having found 25,000 websites). Afterward, he built a search feature and made it available through Mosaic, one of the first browsers.

“The problem was that in order for developers at Mosaic to be aware of a new website, its creators would have to write to the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign – where the browser’s team were based.” –BBC, Jonathon Fletcher: forgotten father of the search engine

JumpStation had everything a modern search engine has; however, the university would not fund the project, and it shut down soon afterward. The other two search engines at the time (1994) were the World Wide Web Worm and the Repository-Based Software Engineering (RBSE) spider. Some say RBSE was the first to implement a ranking system, whereas the other programs merely returned results in the order they were found. It is difficult to say for sure, because many engines were being developed at this time.

Other notable early players were AltaVista and Lycos. AltaVista ran a more advanced web crawler and a better search feature thanks to superior hardware, and it would eventually power Yahoo!’s search engine. Lycos began as a web portal with strong venture capital backing, and it still exists today as a full-service web portal.

The 90’s were the golden age of the internet: companies came and went, websites exploded in number, and search traffic expanded. For its first three years the web grew at a rate of over 70% per year, and around 50% per year for the rest of the 90’s; compare this to less than 10% growth in the last few years. It was still difficult to find exact information, because whole pages were not yet indexed. To solve this problem, a new spider called WebCrawler arrived at the end of 1994. This bot let users search any content on a page, which is what all current search engines allow. Due to the power of this search engine, it was bought by AOL and later Excite.

Competition soared, and search engines began trying to better organize and describe the data they were finding. Collecting websites became the name of the game, and the idea of the directory emerged, adding a human element to the mix.

4.4. Directories Are Created

Yahoo! was the first player, starting in 1994. Begun as a collection of founders David Filo and Jerry Yang’s favorite sites, Yahoo! soon took off in the mainstream. They set themselves apart by writing a human description for each URL, earning better results. Yahoo! originally used others’ search engines to provide its results, including AltaVista and Overture.

[The original Yahoo! homepage, courtesy of Search Engine Journal]

Yahoo! first made money by charging commercial sites for inclusion in their directory. This was a new idea at the time, when most sites were not yet making money. However, because it was good business for commercial sites to get recognized, they all began to head over to Yahoo! for entry.

LookSmart was another directory that opened to compete with Yahoo!. Eventually, both became overwhelmed with requests as the internet’s growth continued. Because of this, another directory, DMOZ, was started in 1998. This open directory project still exists today, although with much less relevance than when it started. DMOZ is free and open to anyone who applies, and is run almost entirely by volunteers.

4.5. End of the Golden Era of the 90’s and Birth of a Giant

Many more search engines were created in this era; too many to list. However, three more big players are worth mentioning. The first was called Ask Jeeves. Launched in 1997, the revolutionary site used human editors to match search queries. This worked well enough, and the site was at the center of many business deals over the years, eventually being acquired for $1.85B and rebranded as Ask.com.

The second was launched by the Microsoft Network (MSN) in 1998. MSN Search was powered by the search engine Inktomi, which had been launched two years prior. This was a web portal and the beginning of Microsoft’s play in the web industry, eventually leading to the building of their own search engine and rebranding to Bing.

Finally, it was during the end of this era that the last search engine worth mentioning was created. Two Stanford students worked on a web crawler named BackRub in early 1996. They went on to publish a monumental research paper on search engine technology and created an algorithm called PageRank, which ranked websites by the number of links they received from other websites, the factor they believed best indicated website quality at the time. On the success of this technology, Larry Page and Sergey Brin dropped out of school and, on September 4, 1998, incorporated Google.
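
The core idea of PageRank, as described in Page and Brin’s paper, is that a link is a vote, and votes from important pages count for more. Below is a simplified sketch; the four-page web is invented, and a real implementation must also handle pages with no outgoing links.

```python
# Simplified PageRank by power iteration. Each page shares its current
# rank equally among the pages it links to; a damping factor of 0.85
# (the value used in the original paper) models a surfer who sometimes
# jumps to a random page instead of following a link.
links = {  # hypothetical miniature web: page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):  # iterate until the ranks settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

for page, value in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(value, 3))  # "c" wins: everyone links to it
```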

[Google’s homepage in 1998, courtesy of Wikipedia]


Did You Know: Sergey and Larry managed to raise $1M just to open Google. Six years later, they became billionaires in the public offering of Google.2

4.6. The Early 2000’s

As the web grew more popular, it grew more sophisticated. New technology and competition were popping up everywhere, forcing companies to push the limits. During this era of the internet, there were many mergers, acquisitions, and failures.

A new company, Baidu, started in 2000, although it may not be well known in the USA. Baidu is a Chinese company that acts as a full web portal and search engine using Chinese characters, and it is the leading search engine in China today (70% market share with over 500M users), owing partly to the bans China imposes on foreign websites. It is also worth noting that most other countries have their own leading search engine brands. While the web may have developed quickest in the USA, which is the main focus of this article, there was a plethora of entrepreneurial spirit around the world surrounding the advent of the WWW.

Back in the USA, Ask Jeeves continued to grow in popularity and acquired Teoma to help improve local search results. In 2004, it acquired another big company, Interactive Search Holdings; one year later it was itself acquired for $1.85B by IAC, which owns many other websites.

Competition began looking elsewhere in order to gain market advantage over the others. Two ways they did this were through meta and vertical search.

Meta Search Engines

Meta search engines began popping up to take advantage of all the competition simultaneously, searching for the best result across multiple engines. Modern versions of the idea still exist, but the biggest name in meta search was a site called Dogpile, which survives to this day.

Meta searching works by putting a search interface in front of a program that gathers results from a multitude of search engines and re-sorts them using certain criteria. In this way, meta engines take advantage of work others have already done. These companies tend to mix ads into their organic results to pay for the service, so for commercial queries the results can be mostly ads. Still, they are powerful tools for finding hard-to-find information across multiple indexes.
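
Here is a sketch of the mechanics, with stand-in engine functions (a real meta engine queries the upstream engines over the network): fan the query out, then merge the lists, rewarding URLs that appear in several engines and near the top of each.

```python
# Meta search sketch: query several engines, then merge their result
# lists into one ranking. The two engines here are hypothetical stubs.
def engine_a(query):
    return ["a.example/1", "shared.example", "a.example/2"]

def engine_b(query):
    return ["shared.example", "b.example/1"]

def meta_search(query, engines):
    scores = {}
    for engine in engines:
        for position, url in enumerate(engine(query)):
            # One possible merging criterion: a URL earns more credit
            # the higher it appears, summed across all engines.
            scores[url] = scores.get(url, 0) + 1.0 / (position + 1)
    return sorted(scores, key=scores.get, reverse=True)

print(meta_search("student guide", [engine_a, engine_b]))
# shared.example ranks first: both engines returned it near the top.
```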

Vertical Search Engines

It was during this time that the major search engines (Google, Yahoo!, and MSN) began fighting for the vertical search market, having already established themselves in the general market. A vertical search market is one focused on a very specific segment, such as answering questions or finding videos.

Fighting for market share, the companies made deals with other companies to utilize their collection of premium content. This varied from lists of businesses to indexes of videos (eventually leading to YouTube in 2005). If you have a very specific market niche, then the vertical search may be more effective than a typical search request.

Did You Know: The YouTube algorithm has been confirmed to have over 1M lines of code!3

Yahoo! has ruled question answering with its Yahoo! Answers platform. Launched in 2005, the community question-and-answer forum provided a natural language platform for asking and answering questions. Other vertical search examples are news searches, scholarly journal searches, maps, and images.

Google launched Google Base in 2005, a database of anything users wanted to submit: people uploaded and described items to put into the Google database. Using this, Google could determine the best vertical search products to focus on and drive more traffic to its site.

4.7. Monetization and Squeezing Competition

Eventually, search engines sought out more ways to make money from their technology. The technology is a business, and the service has to make money to survive. Investors had already put billions of dollars into these companies; it was time they started to provide a return.

The original method of monetization was the payment of a yearly fee for inclusion of your business in the search results. This allowed the startups to make money in their infancy while keeping their product free for the general public. However, this was not the most effective method.

Companies eventually turned to marketing. One of the first marketing methods used was the paid advertisement, known as the “banner ad”. First sold by a company named Global Network Navigator in 1993, banner ads were popularized by an online publication named HotWired. This method became very successful, to the detriment of companies who refused to see promise in any other methods.

Yahoo! was one of these unfortunates. A new monetization method, Pay-Per-Click (PPC), was conceived by a kid named Scott Banister, but Yahoo! dismissed him at the time; it was making plenty of money off advertising and didn’t see PPC as a viable monetization system. Even Google and Microsoft initially failed to recognize the long-term potential of this keyword system. Banister finally got through to Bill Gross of IdeaLab, who launched GoTo, later called Overture, to use the model.

“The more I [thought about it], the more I realized that the true value of the Internet was in its accountability,” Gross tells me. “Performance guarantees had to be the model for paying for media.” – Bill Gross, The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture

By focusing too heavily on advertisements for revenue, the companies lost focus on the search side of their business. They ignored the small businesses that, unbeknownst to them, represented a large part of their market, and others capitalized on this. A company called LinkExchange, which used the model (and had acquired the concept originator’s company), was bought by Microsoft in 1998 for $265M.

Even after the sale and the potential Microsoft saw in the idea, it took over two years to get the keywords idea launched. Once launched, it became so successful that it quickly dwarfed the banner ad division. For reasons unknown, Microsoft saw this as a bad thing and shut down keywords after just a few months.

Google came to its senses and launched AdWords in 2000. Eventually, Overture sold to Yahoo!, which finally got on board with PPC. Now that competition had begun, so did innovation. Google noticed that its flat-rate payment method wasn’t as effective as it could be, so it re-launched AdWords along the lines of Overture’s model, with auction pricing, and added clickthrough rate (CTR) to its algorithm for displaying ads.
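
A simplified sketch of that combination: ads are ordered by bid × CTR (the “ad rank”), and in the textbook generalized second-price form each winner pays just enough to beat the ad below it. The advertisers and numbers here are invented, and Google’s exact pricing rules have varied over time.

```python
# Ad auction sketch: rank by bid * CTR, then charge each winner the
# minimum price that keeps it above the next advertiser's ad rank.
ads = [  # hypothetical advertisers: (name, bid in dollars, clickthrough rate)
    ("alpha", 2.00, 0.05),
    ("bravo", 1.50, 0.09),
    ("charlie", 3.00, 0.02),
]

ranked = sorted(ads, key=lambda ad: ad[1] * ad[2], reverse=True)
for (name, bid, ctr), runner_up in zip(ranked, ranked[1:]):
    price = (runner_up[1] * runner_up[2]) / ctr + 0.01
    print(f"{name}: ad rank {bid * ctr:.3f}, pays ${min(price, bid):.2f} per click")
```

Note how “bravo” outranks “charlie” despite bidding half as much: a relevant ad that actually earns clicks is worth more to the engine than a high bid nobody clicks, which is precisely what factoring in CTR was meant to capture.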

4.8. The Counter to Paid Advertising: Search Engine Optimization

Search Engine Optimization (SEO) has a long history. While a rudimentary version of the practice is as old as the internet, serious SEO came into play around 2003. Until then, people practicing SEO had found that the loophole to ranking organically was to manipulate the links and keywords on their pages. Infamously, Google rolled out its Florida update in November 2003, which completely changed the game of ranking: it implemented a “link spam” filter to demote sites using this method. After that, SEO became more of a science.

The practice of SEO helped pages rank without paying for advertisements, which was of course a popular alternative for those trying to make money on the web. With a solidly built website, you could rank at the top of the Search Engine Result Pages (SERPs) and convert search traffic into customers without paying a dime for advertising.

SEO today is the business of analyzing and updating web pages so that they rank better in search results, via analysis of links, on-page content, and all the other known ranking factors for organic results. The term’s origin is unknown; the practice originally focused on the “link spamming” and “keyword stuffing” mentioned above, as well as optimizing file names, page titles, and meta descriptions.

Link analysis is a major part of SEO and has repeatedly pushed Google to update its algorithm. After the 2003 update came the “nofollow” attribute in 2005, which helped blog owners curb the spam links that were lowering their pages’ rankings; since it was an internet-wide update, it also helped webmasters improve the quality of their own link profiles. Today, a link’s value depends on its anchor text, the number of links to other quality sites, the number of internal links, and the number of authority sites pointing to you. Of course, no one outside Google truly knows what the algorithm uses, so there are other, unknown factors as well.
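
The nofollow mechanism itself is simple: a link carries rel="nofollow" in its HTML, and link-analysis tools are expected to exclude it from the link graph. A small sketch of a parser honoring that rule:

```python
# Separate links into those that pass ranking value and those marked
# rel="nofollow", which link analysis should ignore.
from html.parser import HTMLParser

class FollowableLinks(HTMLParser):
    def __init__(self):
        super().__init__()
        self.followed, self.ignored = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        if "nofollow" in attrs.get("rel", ""):
            self.ignored.append(href)   # passes no ranking value
        else:
            self.followed.append(href)  # counts toward link analysis

parser = FollowableLinks()
parser.feed('<a href="https://example.com">ok</a>'
            '<a rel="nofollow" href="https://ads.example">paid link</a>')
print("followed:", parser.followed)
print("ignored: ", parser.ignored)
```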

Google also introduced an “age” factor, similar to a credit rating, to combat SEO manipulation. It allows sites with more experience (i.e. “age”) to rank higher than a site someone just put up and spammed with links and advertising to drive traffic, all of which could be done in a day.

Another strategy used by SEOs is link buying, which certain SEO companies still practice today: buying high-quality links to your site directly from the authoritative site. Search engines dislike this and enforce a strict policy that paid links must use the nofollow attribute. Failing to comply can get your domain banned entirely, so it is a risky business. Hypocritically, the search engines themselves don’t always appear to follow the policy.

SEO has been and will continue to be a driver in search engine algorithm updates, and therefore is an important part of the history of search engines. The industry itself is quite detailed and prevalent, employing an unknown, but likely large, number of people. It is highly sought after and an important part of internet marketing today.

4.9. The Rest of the History in the Late 2000’s

Finally, by the end of the 2000’s, search engines were making consistent money. Many hard lessons were learned along the way, one being a famously massive IPO flop. Those who learned from their mistakes, or who simply had a massive presence, stayed in the game. However, one company began to stand out above all the rest: Google.

Did You Know: One estimate shows that $5 trillion were lost in the dot com bust of the late 90’s!4

Yahoo! and MSN were losing ground. Yahoo! in particular made a series of mistakes that led to a total loss of ground to Google, including the already discussed lack of vision on PPC and not investing enough in developing its own search. MSN likewise missed the PPC ballgame and simply did not have a quality search. By July 2008, Google already had over 70% of the search market.


[Search market share chart, courtesy of Net Market Share]


Why didn’t others succeed where Google did?

Google focused on innovation in all areas. Undoubtedly, it had the best search and results. It was also the best transactional site, pioneering further improvements in the paid advertising model. Finally, it did what Yahoo! had originally done and made itself a branded destination that people wanted to visit; Google’s success was that it became the go-to place for the web. For employees, the workplace became a hub of innovation. In business, Google partnered with internet giant AOL.

Google’s continued success comes from its ever-updating PPC and PageRank models, which provide more and more relevant content to users. Google’s initial model was flat-rate PPC; it then moved to auction pricing with an ad CTR factor, disallowed double ads, and later factored in landing page quality. It also rolled out AdSense, which automatically places contextually relevant text, image, and video ads. Google grew 40-60% each year during this period, and from 2002 to 2009 it went from 41B queries per year to 792B; Yahoo! handled only 90-100B per year by comparison.

Across all platforms, Google added targeted marketing based on location, age, and other demographics. It also factored the quality of the page where a click originates into its pricing, so that a click from a low-quality affiliate page costs a different amount than one from an in-depth review or other quality site.

Yahoo! did not update its model much and continued to suffer from incomplete search and an outdated monetization model. Microsoft finally got on board and rebranded MSN as Bing, with much-needed technological improvements. Bing has come into its own, and while it is successful, it was too late to the game to overcome Google’s massive market share.

4.10. Modern Day and Behind the Scenes Influence

In just two short decades, information went from being almost entirely inaccessible to being available to anyone if they care to search for it. “Knowledge is power” no longer holds quite as much weight, since there is no shortage of ability to gain it. Search engines have completely changed the way the world and the economy works.

Although less palatable, it is important to understand that one of the biggest influences on search engines has been the pornography industry. In a book titled The Erotic Engine, the author explores the real driving forces behind monetization and traces where many innovations truly came from. This industry heavily influenced the vertical search market, for example, and took in more than 70% of the $1.4B spent online in 2000.

“Pornographers pioneered affiliate marketing systems, cookies, file sharing, even “long tail” business models and email-based direct marketing—all things today’s online marketers associate with above-board marketing.”

Regardless of your feelings on the subject, the world has indeed changed due to the internet. It provides endless opportunities for entrepreneurs of all kinds. It has changed the way humans live their lives and created an entirely new type of lifestyle. It has allowed people to learn even without going to school. The technology will continue to develop and evolve business. But it’s no longer a matter of where to get information, as that battle has been settled.

This brings up an interesting point as well: how does someone like Google ensure that you’re seeing relevant information when there is so much spam out there? Google has built anti-spam technology into its algorithm to demote pages that are merely trying to rank rather than being legitimate websites. It also employs a division of people who do this manually. Without this team, you would likely see many unwanted results in your search queries, which goes to show how much effort Google puts into providing the best service.

The Big Three Left Standing

Today, there are three main search engines in the USA that have withstood the test of time and are widely used. These are Google, Yahoo!, and Bing. While there are many other search engines out there, these three have the most influence over the users of the internet.

Yahoo! was one of the original pioneers but has lost much of its market share due to its inability to keep up with the changing market. In fact, Yahoo! search is even now powered by Bing due to its failure to develop its own search. Bing owns about 19% of the search market, Yahoo 10%, and Google about 68%.

Google is Paving the Future

The company that runs the show has had an interesting history. Like the others, it was begun by young college students, in this case at Stanford. Perhaps unlike the others, the co-founders had an appetite for trying to change the world:

“The name—a play on the word “googol,” a mathematical term for the number represented by the numeral 1 followed by 100 zeros—reflects Larry and Sergey’s mission to organize a seemingly infinite amount of information on the web.” –Google History

Google is constantly testing every aspect of its algorithm and business model, even micro changes like yellow versus blue stars on the SERPs. It is known that over 20,000 human reviewers of the search engine help keep its results the best in the market. Google has rolled out large, influential updates over the years that continually keep it ahead of the curve, including Florida, Caffeine, Panda, Penguin, Pigeon, Hummingbird, and many more. Even the names seem to add a mystique to the company.

There are actually two different categories of Google updates. One group is algorithmic updates, which change how Google scores page ranking and quality; these include Florida, Pigeon, Penguin, and Panda. The other is structural updates, which change how Google understands queries and displays results. These let Google handle longer key phrases and natural language more effectively, in turn providing more relevant results and often answering directly in its Knowledge Graph.

Reports say search traffic is ever-evolving: 40-50% of queries each day are brand-new searches that Google has never seen before. Google is also slowly incorporating more of a paid model into its SERPs, versus the original organic-only version.

“We are not-so-slowly but surely moving away from the “free search traffic” phenomenon. It was pretty stupid to guarantee rankings in the past — it will be impossible to do so in the future. This may result in fewer companies willing to invest in the long-term benefits of SEO. It’s much easier (and in many cases more reliable) to invest in PPC (both in search and social), so internet marketing companies need to embrace PPC management if they haven’t already.” –Ann Smarty

Finally, Google has provided innovative technology such as Google Maps and Google Earth, changed the game by acquiring and massively improving YouTube, and even begun moving into technology outside its original purpose, such as automated cars and defense-industry robotics. The company has done incredible things in less than 20 years that took other companies 50-100 years to achieve.

Did You Know: Google Maps has taken over 21M gigabytes of data, tens of millions of photos, and over 5 million miles driven (or ridden on bicycles, trolleys, or snowmobiles!) to build.5

5. The Business of Google vs. SEO

Google is a large, publicly traded corporation. While it doesn’t appear to be the same as some of the other giants out there, like Wal-Mart or General Motors, it still has an obligation to make money and return its shareholders’ investment. This puts it in an interesting position with regard to the SEO industry and organic search in general.

It is well known that your rankings have a direct impact on the revenue your company earns; CTR improves massively as you reach the top 5 rankings in the SERPs. Some estimates say that ranking below 5 reduces the chance of being clicked by 70%.

Businesses can choose between trying to rank organically, simply using advertisements, or a mixture of the two. While both markets can be volatile, the organic route can have massive consequences when Google changes its algorithm: overnight, your business could drop out of the rankings and lose all of its potential new customers. When using this model, it is imperative to stay up to date on Google’s latest changes, and to have an SEO professional monitor things if you are not one yourself.

So why does Google update its algorithm? Sure, there may be honorable reasons behind doing so. But there is also plenty of evidence that Google is simply employing smart business itself. If most of Google’s revenue comes from PPC and other advertising, what benefit does your organically ranked website offer them? None, it seems. Some analysts have shown that each algorithm update has been followed by a stock price increase. Keeping the SEO industry guessing is smart business, and could be one determining factor behind all the updates.

Did You Know: Current estimates show that Google processes 3.5B search queries a day, which translates into 40,000 per second. Google brought in $66B revenue in 2014 and has grown ~20% each year recently. After its expenses, it made $14B in profit in 2014, 90% of which is estimated to be from advertising.6, 7, 8

6. Future of Search Engines

It appears the web is here to stay and is only growing larger. Search traffic currently comes in three forms: informational (seeking info about a topic), transactional (shopping and downloading), and navigational (heading directly to a URL). Of course, at some point a new technology will replace our “primitive” search engines, but until then, here is a look at where search is going.
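
As a rough illustration of the three categories, a naive rule-based classifier might look like the sketch below; the trigger words are made up, and real engines infer intent from far richer signals.

```python
# Naive query-intent classifier for the three traffic types described
# above. Purely illustrative; the keyword lists are invented.
TRANSACTIONAL = {"buy", "download", "price", "deal", "cheap"}

def classify(query):
    words = query.lower().split()
    if any(w.startswith(("http", "www.")) or w.endswith(".com") for w in words):
        return "navigational"   # the user wants one specific site
    if TRANSACTIONAL & set(words):
        return "transactional"  # the user wants to buy or download
    return "informational"      # the user wants to learn about a topic

for q in ("facebook.com", "buy running shoes", "history of search engines"):
    print(f"{q!r} -> {classify(q)}")
```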

6.1. Integrated Technology

Mobile search has been slowly increasing as the hardware enabling it improves. Smart phones are almost everywhere and still expanding rapidly. The evolution of the smart phone may bring new changes to search technology, and Google’s new mobile-readiness update signals this potential.


[Chart courtesy of Smart Insights]


Smart phones have always had everything integrated, but with the arrival of modular phones you can now choose which technology you need and want. Mobile technology lets businesses target potential customers in new ways, and future search will likely rely on this heavily.

6.2. Becoming more Man than Machine

New search engine technology will try to anticipate needs rather than wait for search queries. Technology is being developed to more fully utilize the senses (vision, hearing, touch) and to understand the brain (reasoning). We may end up with SkyNet before we know it.

Visual search engines already exist. Have you ever been on Facebook and had it suggest a tag for you? That is visual search technology in action. Facial recognition has actually been under development since 1965, and fingerprint technology since 1946. These were databases rather than search engines, however; now that the software is being developed in commercial settings, we can expect to see more of it.

Voice-activated search was first made popular by Siri, the digital assistant introduced on the iPhone in 2011. Google Now launched shortly after, and Microsoft has its own voice search, Cortana. The point of these is to give results in a natural language voice, which is a novelty but also a large step toward natural language use in technology. Compare Stephen Hawking’s voice synthesizer with these.

One element of the voice search game is question answering: by speaking a natural language question, you get back a contextual answer. Systems of this type are built on a knowledge base of language, which includes texts, documents, web pages, news reports, Wikipedia articles, and so on. Wolfram Alpha is an example of such a knowledge base; it returns an answer rather than a list of relevant search results.

6.3. Continuing Specialization of Vertical Search

As all things become more specialized, search engines try to capture these markets. While the dominant three are the most popular, many types of vertical search engine exist for those looking for a specific niche. Examples include YouTube, Flickr, and Pinterest. One of the newer ideas is the faith-based engine, which allows followers of faiths such as Islam, Judaism, and Christianity to search within the parameters of their faith.

6.4. Further Organization of Data

Google’s Structured Data Markup and Knowledge Graph integrate known data about a search term and help you find and contextualize information more quickly. For instance, searching for a movie brings up a box that shows the movie’s ratings from various rating authorities, the actors’ names and pages, a synopsis, and a one-click path to stream the movie online, along with other relevant searches.
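
Site owners feed these boxes by embedding structured data in their pages, most commonly using the schema.org vocabulary (JSON-LD is one of the supported formats). Below is a sketch that generates such markup; the movie details are placeholders.

```python
# Generate schema.org JSON-LD markup for a movie page. "@type": "Movie"
# and the property names come from the schema.org vocabulary; the
# values below are invented.
import json

movie_markup = {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "An Example Film",
    "actor": [{"@type": "Person", "name": "A. Performer"}],
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "8.1",
        "ratingCount": "1000",
    },
}

print('<script type="application/ld+json">')
print(json.dumps(movie_markup, indent=2))
print("</script>")
```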

This type of technology may reduce the number of organic search results being provided as the search engines try to guess the best results for you. This aligns with the fact that there are more ads being shown and signifies the moving away from the purely organic search model.

7. Conclusion

Beginning with a simple idea from a scientist in World War II, the collection of data and building of a network called the internet was born. The early 1990’s were the playground while the industry was in its infancy. Early search engines and web crawlers performed different tasks until the big players came of age in the late 90’s and realized that these two needed to work in tandem to provide the best service.

Many young entrepreneurs are credited with helping to kick-start the industry, including Alan Emtage (Archie), Matthew Gray (World Wide Web Wanderer), Jonathon Fletcher (JumpStation), David Filo and Jerry Yang (Yahoo!), and finally Larry Page and Sergey Brin (Google). These young college students saw opportunity and built upon the ideas of others to come up with truly remarkable technology that is a commodity today.

Developing a successful business is another story. While MSN and Yahoo! were very successful search companies in their prime, they lacked the business sense to innovate their monetization models and were unfortunately squashed by the giant that is now Google. Building on its initial success, Google has maintained its dominance and focus on innovation, and has even been named the #1 Best Company to Work For by Fortune, a position it has held for six years straight.

Search engines use a variety of tools to make money and fund their services. Originally, a pay-for-inclusion model got sites added to the directory and index. Next came the banner ad, an online billboard advertising another website. Then came the PPC model, which changed the game by letting search engines charge for particular keywords and phrases; it was later updated from a flat rate to an auction, so that competition kept the market accountable. Finally, while advertising remains the main avenue of revenue, these businesses also make money from other services as they expand their offerings, such as email, cloud storage, and software.

Other search engines exist around the world, like Baidu, Alibaba, and Yandex. However, Google is larger than any of these and holds a presence in most countries using the internet. Yahoo! and Bing hold considerable market share compared with the world’s smaller search engines, and most search is done on the engines listed above. AOL and Lycos still account for a small portion of search traffic.

The technology is still changing rapidly, and search is now integrated into mobile devices, the other large expanding market of today. This gives search seemingly endless reach. Search engines also capitalize on specialized markets by creating vertical search engines to search for video clips, airline flights, jobs, and countless other data. While it isn’t known how search will evolve in the next 5-10 years, you can expect it to continue to be a large part of your daily life.

There is no doubt that search has changed the lives of millions of people. It allows access to an unlimited amount of information, as long as you know how to find it. It is only becoming more accessible. Certainly, one can argue pros and cons of this, but for those using it for the good of mankind and in positive ways, it has profound implications. It is unknown how much more things will change and the impact to come, but at least now you have a more thorough look at where it evolved from.


8. References

Did You Know References: