
Scope
 

Search Engine Optimisation (SEO)

Product Design

 

Objective
 

Swiggy aims to improve its search engine optimisation to drive higher organic traffic to the platform.

Current stats
 

Swiggy gets 1.3 Mn monthly non-branded visits from Search (i.e. from queries that do not explicitly mention Swiggy), while Zomato gets 10.6 Mn.

Zomato's click share is 9x Swiggy's in Food, and BigBasket's click share is 50x Instamart's.

We have identified 1,406 top queries that contribute 95% of total search traffic for the Food category; more than 28,000 long-tail queries contribute the rest.

In this phase, we will target only these 1,406 queries, as they offer the highest impact in the least time.
These queries can be classified into three buckets: City/Area, Brand/Restaurant, and "near me".
Across these queries our median rank is 17; the goal is to bring the median rank below 10.
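As an illustration, bucketing like this can be automated with simple matching rules. The sketch below is a minimal Python example; the city and brand lists are made-up placeholders, not Swiggy's actual query taxonomy.

```python
# Minimal query-bucketing sketch. CITIES and BRANDS are illustrative
# placeholders; a real pipeline would load them from the catalogue.
CITIES = {"bangalore", "mumbai", "delhi", "pune", "hyderabad"}
BRANDS = {"dominos", "kfc", "mcdonalds", "burger king"}

def bucket(query: str) -> str:
    q = query.lower()
    if "near me" in q:
        return "near me"
    if any(b in q for b in BRANDS):
        return "brand/restaurant"
    if any(c in q for c in CITIES):
        return "city/area"
    return "long tail"

for q in ["biryani near me", "dominos bangalore", "best restaurants in pune"]:
    print(f"{q!r} -> {bucket(q)}")
```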

The Plan

To address each of these query types, Swiggy will build 500k auto-generated web pages (up from fewer than 5k today): popular restaurants in a city/suburb, popular brand (or restaurant-chain) outlets in a city/area, and cuisine-specific restaurants in a city/area.
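A rough sketch of how such pages could be generated at scale, by crossing page templates with city, brand, and cuisine lists. The URL patterns and input lists here are assumptions for illustration, not Swiggy's actual routing.

```python
from itertools import product

# Illustrative inputs; real lists would come from Swiggy's catalogue.
cities = ["bangalore", "mumbai", "pune"]
cuisines = ["biryani", "pizza", "chinese"]
brands = ["dominos", "kfc"]

# Each template corresponds to one of the three page types above.
def page_specs():
    for city in cities:
        yield (f"/city/{city}", f"Popular restaurants in {city.title()}")
    for brand, city in product(brands, cities):
        yield (f"/{brand}/{city}", f"{brand.title()} outlets in {city.title()}")
    for cuisine, city in product(cuisines, cities):
        yield (f"/{cuisine}-restaurants/{city}",
               f"Best {cuisine.title()} restaurants in {city.title()}")

for url, title in page_specs():
    print(url, "|", title)
```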

Goal

For the first phase, Swiggy is targeting the top 3 results for 600 of the 1,406 keywords, the top 10 for 1,160 of them, and the top 30 for all 1,406. This is projected to drive 1.35 Mn sessions per month to the Swiggy platform.
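Progress against these targets can be tracked from a keyword-to-rank export. A small sketch, assuming ranks come from a rank-tracking tool (the sample data below is made up):

```python
from statistics import median

# Hypothetical rank data: keyword -> current search position.
ranks = {"biryani bangalore": 4, "pizza near me": 12,
         "dominos mumbai": 2, "chinese restaurants pune": 28}

print("median rank:", median(ranks.values()))
for label, cutoff in [("top 3", 3), ("top 10", 10), ("top 30", 30)]:
    count = sum(1 for r in ranks.values() if r <= cutoff)
    print(f"{label}: {count}/{len(ranks)} keywords")
```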


Indexing
 

Once a search engine processes each page it crawls, it compiles a massive index of all the words it sees and their locations on each page. The index is essentially a database of billions of web pages.
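Conceptually this is an inverted index: each term maps to the pages (and positions) where it occurs. A toy Python version, with made-up URLs:

```python
from collections import defaultdict

# word -> list of (page_url, position) postings
index = defaultdict(list)

def index_page(url: str, text: str) -> None:
    for pos, word in enumerate(text.lower().split()):
        index[word].append((url, pos))

index_page("swiggy.com/city/pune", "popular restaurants in pune")
index_page("swiggy.com/pizza-restaurants/pune", "best pizza restaurants in pune")
print(index["restaurants"])  # every page/position where the term appears
```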

Crawling

Crawling is the process by which search engines discover new and updated content on the web, such as new sites or pages, changes to existing sites, and dead links.
To do this, a search engine uses a program referred to as a 'crawler', 'bot', or 'spider' (each search engine has its own), which follows an algorithmic process to determine which sites to crawl and how often.
As a search engine's crawler moves through your site, it also detects and records any links it finds on these pages and adds them to a list to be crawled later. This is how new content is discovered.
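A minimal crawler sketch in Python, using only the standard library, to illustrate the follow-links-and-queue process described above. The seed URL is a placeholder; a production crawler would also respect robots.txt and crawl budgets.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed: str, limit: int = 10) -> set:
    seen, frontier = set(), deque([seed])
    while frontier and len(seen) < limit:
        url = frontier.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except (OSError, ValueError):
            continue  # dead or unfetchable link; real crawlers record these too
        parser = LinkParser()
        parser.feed(html)
        # Newly discovered links go onto the list to be crawled later.
        frontier.extend(urljoin(url, link) for link in parser.links)
    return seen

print(crawl("https://example.com", limit=3))
```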

References


Zomato


DoorDash

Final City Pages


You can check the actual website here:

