Spearman Rank Correlation for Semantic Similarity: A measure of correlation between ranked lists

    This project explores the application of Spearman Rank Correlation as a statistical measure to evaluate semantic similarity between blog content across multiple companies. By leveraging modern language models to assess how closely each company’s content aligns with a target search query, the project enables a comparative analysis of SEO strategies based on relevance and thematic focus.

    Spearman Rank Correlation for Semantic Similarity

    The analysis processes blog detail pages from selected SEO companies. It converts the textual content of each blog into dense vector representations using a semantic embedding model, then ranks each blog by its relevance to a predefined query. The ranked lists for each company are then compared using Spearman Rank Correlation to assess alignment or divergence in content relevance.

    The final output highlights the semantic positioning of each company’s content, offering a data-driven lens into competitive strategy, content optimization opportunities, and overall thematic alignment with search intent.

    Project Purpose

    The purpose of this project is to provide an objective, interpretable metric to compare how different SEO companies position their content in relation to a specific search intent. Instead of relying solely on keyword-based metrics or surface-level content analysis, this project adopts a semantic ranking approach — enabling deeper insights into how well each company’s blog posts address a common query.

    Key goals include:

    • Identifying content relevance gaps between companies for the same search intent.
    • Quantifying alignment in content strategies using rank-based correlation.
    • Offering a structured framework for semantic content benchmarking in SEO contexts.

    This approach serves digital marketers, SEO teams, and content strategists seeking to benchmark their own content against competitors, identify areas of strength, and uncover opportunities for improvement through relevance-based semantic comparisons.

    Understanding Semantic Similarity in SEO Context

    What does “semantic similarity” mean, and why does it matter in SEO?

    Semantic similarity refers to how closely two pieces of text relate in meaning — regardless of whether they use the exact same words. In SEO, this is critical because users don’t always search using the same phrases. They may ask:

    • “How to get more traffic to a blog”
    • “Ways to boost site visits using SEO”
    • “Improve website visibility organically”

    Though phrased differently, the underlying intent is the same. Search engines like Google are increasingly optimized to detect this semantic equivalence, not just keyword matching.

    This project uses advanced language models (Sentence-BERT) to capture semantic meaning. It evaluates how well blog articles align with a given user intent — expressed as a natural-language query — and ranks them by their semantic relevance.
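
    As a small illustration (the exact score will depend on the model and is not taken from this project's results), encoding two of these phrasings and comparing them shows how the model captures shared intent rather than shared keywords:

    from sentence_transformers import SentenceTransformer, util

    # The retrieval-tuned model described later in this article.
    model = SentenceTransformer('msmarco-distilbert-dot-v5')

    queries = [
        "How to get more traffic to a blog",
        "Ways to boost site visits using SEO",
    ]

    embeddings = model.encode(queries, convert_to_tensor=True)

    # A high dot-product score between the two phrasings indicates shared intent,
    # even though the wording barely overlaps.
    print(util.dot_score(embeddings[0], embeddings[1]))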

    In SEO, this helps in:

    • Optimizing content for user intent, not just keywords.
    • Detecting missed opportunities to cover a topic more deeply.
    • Comparing how well competitors align with the same audience query.

    Understanding Spearman Rank Correlation in This Project

    What is Spearman Rank Correlation, and why is it used here?

    Spearman Rank Correlation is a statistical method used to measure the strength and direction of a relationship between two ranked lists. Unlike other correlation methods that rely on the actual values (like Pearson), Spearman focuses only on the relative order of values.

    In this project, each company has a set of blog URLs. These URLs are scored for semantic relevance to a query, then ranked. Spearman Rank Correlation then compares how similarly two companies rank their blog content in response to the same query.

    Here’s why Spearman is ideal:

    • Value-agnostic: It doesn’t care about how high or low the similarity scores are — just how content pieces are ordered.
    • Cross-company friendly: One company’s scoring scale may differ slightly, but ranks are easier to compare reliably.
    • Strategic insight: A high Spearman correlation means two companies structure their content similarly around a query. A low or negative value signals differences in content focus or quality.

    This helps in:

    • Measuring overlap or uniqueness in content strategy.
    • Understanding which competitors are aligned with similar search intents.
    • Identifying content gaps or areas of over-saturation.

    For example:

    • Correlation of 1.0 = perfect alignment in ranked content structure.
    • Correlation near 0.0 = no meaningful alignment — unique strategies.
    • Negative correlation = inverse strategy — what one prioritizes, the other ignores.
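
    These outcomes are easy to reproduce with a toy example (hypothetical rank lists, not project data), using the same scipy function applied later in the pipeline:

    from scipy.stats import spearmanr

    # Hypothetical ranks of five blog topics as ordered by two companies.
    company_a = [1, 2, 3, 4, 5]
    identical = [1, 2, 3, 4, 5]   # same ordering
    reversed_ = [5, 4, 3, 2, 1]   # opposite ordering

    corr, _ = spearmanr(company_a, identical)
    print(corr)   # 1.0  -> perfect alignment

    corr, _ = spearmanr(company_a, reversed_)
    print(corr)   # -1.0 -> inverse strategy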

    How does combining Semantic Similarity with Spearman Correlation provide a competitive advantage?

    By combining semantic relevance (from NLP models) with rank correlation (via Spearman), the analysis reveals both quality and strategic similarity. It enables:

    • Relevance scoring: Which articles are most semantically aligned with the user query?
    • Rank-based comparison: How do these relevance patterns compare across companies?
    • Strategic insight: Are competitors targeting the same intent, or taking different paths?

    This layered insight cannot be achieved by using keyword-based tools or traffic estimates alone. It provides a deep, intention-aware lens into how content strategies perform relative to real user needs.

    How can this project help improve content strategy across multiple websites?

    This project allows direct comparison of how different companies are performing in terms of semantic relevance to a common search intent. By ranking the blog content of each company and comparing these rankings, it becomes clear which company is producing the most query-relevant content.

    This enables content teams to:

    • Benchmark content effectiveness against competitors.
    • Identify strong-performing articles worth amplifying or repurposing.
    • Spot gaps or weaknesses where content fails to address user needs.
    • Prioritize rewriting or retiring outdated posts based on semantic underperformance.

    How does this support long-term SEO planning and content optimization?

    By applying this method across different queries or over time, the insights can shape a robust content roadmap. It provides data-backed answers to critical questions like:

    • Are we improving our alignment with target user intent?
    • Are we catching up to, or diverging from, our competitors?
    • Which topic clusters need more investment or refinement?

    Over time, tracking changes in Spearman Rank Correlation can highlight whether strategic adjustments are moving content in the right direction or need reevaluation.

    What’s the unique edge this project provides over traditional SEO tools?

    Most SEO tools focus on surface-level metrics: keyword density, backlinks, or ranking positions. This project dives deeper — analyzing semantic depth and competitive intent alignment using NLP models and statistical ranking comparison.

    By combining:

    • Transformer-based semantic understanding (via Sentence-BERT),
    • Ranking correlation (via Spearman),

    the analysis becomes intent-aware, competitive, and strategic — going beyond what typical SEO dashboards can offer.

    Libraries Used in This Project

    requests

    Purpose: Fetches raw HTML content from web pages.

    Role in the project: Used to retrieve the full content of blog detail pages from different websites. It allows the project to automate the data collection process by sending HTTP requests and capturing responses, which include titles and textual content.

    BeautifulSoup (from bs4)

    Purpose: Parses and extracts structured data from HTML content.

    Role in the project: Used to cleanly extract readable text — particularly from <title> and <p> (paragraph) tags — after a page is downloaded. This ensures that only meaningful, visible content is passed on for semantic analysis.

    sentence_transformers

    Purpose: Enables use of pretrained transformer models fine-tuned for semantic textual similarity tasks. Key components used:

    • SentenceTransformer: Loads transformer models like msmarco-distilbert-dot-v5.
    • util.dot_score: Calculates dot-product similarity between the query embedding and each content embedding.

    Role in the project: This is the core NLP engine. It converts text (queries and blog content) into dense semantic vectors, which are then compared using similarity scoring. This is what allows the system to understand meaning beyond keywords.

    scipy.stats.spearmanr

    Purpose: Performs Spearman Rank Correlation — a non-parametric statistical test.

    Role in the project: Used to compare the rank order of blog content across companies. For each company, blog pages are ranked by their semantic relevance to the query. Spearman Correlation compares these rankings to assess alignment or divergence in strategy.

    numpy (np)

    Purpose: Supports efficient numerical computations and array manipulations.

    Role in the project: Used to generate rank arrays (via np.argsort) and handle numerical operations related to sorting and indexing of similarity scores. This is essential for preparing data before correlation analysis.
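
    A minimal sketch of how similarity scores can be converted into rank positions with np.argsort (the scores below are illustrative only):

    import numpy as np

    # Hypothetical similarity scores for four blog pages (higher = more relevant).
    scores = np.array([53.5, 78.3, 72.8, 60.1])

    # argsort of the negated scores gives page indices from most to least relevant;
    # applying argsort again converts that ordering into a rank per page (0 = best).
    order = np.argsort(-scores)
    ranks = np.argsort(order)

    print(order)  # [1 2 3 0]  -> page 1 is the most relevant
    print(ranks)  # [3 0 1 2]  -> page 0 ends up ranked last here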

    itertools.combinations

    Purpose: Generates all possible pairwise combinations from a list.

    Role in the project: Used to compare every pair of companies without repeating combinations. This helps in running Spearman Rank Correlation efficiently across all company pairs, such as (Company A vs B), (B vs C), etc.
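
    For example (hypothetical company labels), combinations yields each unordered pair exactly once:

    from itertools import combinations

    companies = ["Company A", "Company B", "Company C"]

    for first, second in combinations(companies, 2):
        print(first, "vs", second)
    # Company A vs Company B
    # Company A vs Company C
    # Company B vs Company C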

    Function Explanation fetch_content: Web Content Extraction for Semantic Scoring

    This function plays a foundational role in the project — it gathers the actual blog content that will later be scored for relevance to the SEO query. The success of semantic similarity scoring and ranking depends on the quality and completeness of this initial content extraction.

    Breakdown:

    ·         response = requests.get(url, timeout=10)

    • Sends an HTTP GET request to the blog page’s URL.
    • Downloads the full page content.
    • timeout=10 ensures the request doesn’t hang indefinitely if a site is unresponsive.

    ·         soup = BeautifulSoup(response.content, 'html.parser')

    • Parses the raw HTML using Python’s built-in HTML parser.
    • Converts the HTML into a tree-like structure that makes it easy to navigate and extract elements like <p> or <title>.

    ·         title = soup.find('title').get_text(strip=True) if soup.find('title') else ''

    • Extracts the page’s <title> tag — which often summarizes the topic of the blog post.
    • strip=True removes any surrounding whitespace.
    • If <title> isn’t found, it returns an empty string gracefully.

    ·         paragraphs = soup.find_all('p')

    • Finds all paragraph tags (<p>) on the page.
    • These typically hold the bulk of the actual content.

    ·         content = ' '.join([p.get_text(strip=True) for p in paragraphs])

    • Loops through every paragraph and extracts its visible text.
    • Joins all the paragraph texts into a single string, separated by spaces.

    ·         return title + " " + content

    • Concatenates the blog’s title and body text into one long string.
    • This combined string becomes the input for semantic vector encoding in the next step of the pipeline.
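
    Putting the steps above together, a plausible reconstruction of fetch_content looks like this (a sketch based on the breakdown; details such as error handling in the original code may differ):

    import requests
    from bs4 import BeautifulSoup

    def fetch_content(url):
        """Download a blog page and return its title plus visible paragraph text."""
        response = requests.get(url, timeout=10)  # avoid hanging on unresponsive sites
        soup = BeautifulSoup(response.content, 'html.parser')

        # The <title> tag usually summarizes the post; fall back to '' if missing.
        title = soup.find('title').get_text(strip=True) if soup.find('title') else ''

        # <p> tags typically hold the bulk of the readable content.
        paragraphs = soup.find_all('p')
        content = ' '.join([p.get_text(strip=True) for p in paragraphs])

        return title + " " + content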

    Function Explanation compute_similarity_scores

    This function measures how semantically relevant each webpage is in relation to a target query. By comparing high-dimensional vector representations (embeddings) of the query and content, this function generates numerical relevance scores. These scores help rank which content is most aligned with the search query, enabling deeper insights for SEO performance evaluation.

    Breakdown:

    ·         query_embedding = model.encode(query, convert_to_tensor=True)

    • Converts the input query into a dense vector using a transformer-based model. This vector captures the semantic meaning of the query, not just individual keywords.

    ·         content_embeddings = model.encode(contents, convert_to_tensor=True)

    • Converts each content item (blog or article) into its own semantic vector representation.

    ·         cosine_scores = util.dot_score(query_embedding, content_embeddings)[0].cpu().numpy()

    • Measures semantic relevance using the dot product between the query and each content embedding. Higher scores indicate stronger similarity. The [0] selects the first (and only) row of the score matrix, and .cpu().numpy() converts it into a NumPy array for further numeric analysis.

    ·         return cosine_scores

    • Outputs a list of similarity scores that can be ranked and later compared using Spearman correlation.
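
    Based on the breakdown above, compute_similarity_scores can be reconstructed roughly as follows (a sketch; the exact signature and variable names in the original code may differ):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer('msmarco-distilbert-dot-v5')

    def compute_similarity_scores(query, contents):
        """Return one dot-product relevance score per content item."""
        query_embedding = model.encode(query, convert_to_tensor=True)
        content_embeddings = model.encode(contents, convert_to_tensor=True)

        # util.dot_score returns a 1 x N score matrix; take its first row and
        # move it to the CPU as a NumPy array so it can be ranked later.
        cosine_scores = util.dot_score(query_embedding, content_embeddings)[0].cpu().numpy()
        return cosine_scores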

    Understanding the Similarity Score Used in This Project

    In this project, the dot_score function is used to calculate similarity scores.

    What is dot_score?

    The dot_score function measures similarity between two vectors by calculating their dot product. In the context of semantic embeddings (vectors derived from textual content), this score reflects how aligned the meanings of two texts are.

    How It Works:

    Each piece of text — whether a query or a document — is represented as a high-dimensional vector. These vectors are not random; they are trained to encode semantic meaning.

    The dot product between the query vector and a content vector increases when:

    • The vectors point in the same direction (i.e., have similar meaning),
    • And the vector magnitudes (representing informational strength or density) are large.

    Mathematically:

    dot_score(A, B) = Σᵢ Aᵢ · Bᵢ  (i.e., A₁·B₁ + A₂·B₂ + … + Aₙ·Bₙ)

    This is different from cosine similarity, which normalizes vector lengths and measures the angle between them. Dot score keeps magnitude in play — making it more sensitive to how strong or informative the match is, not just how “directionally similar” it is.
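
    The contrast is easy to verify with toy vectors (purely illustrative numbers):

    import numpy as np

    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([2.0, 4.0, 6.0])   # same direction as a, twice the magnitude

    print(np.dot(a, b), cosine(a, b))  # 28.0, ~1.0 -> cosine ignores the extra magnitude
    print(np.dot(a, a), cosine(a, a))  # 14.0, ~1.0 -> dot score rewards the stronger vector b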

    Why Dot Score Was Used Here:

    • Model Choice Compatibility: The selected Sentence-BERT model performs well with dot scoring because the embedding space is trained to encode semantic strength, not just direction.
    • Practical Insight for SEO: Dot score gives more expressive ranking scores which help in distinguishing content relevance with more granularity.
    • No Need for Normalization: Unlike cosine similarity, which requires normalized vectors, dot score can reflect both meaning and confidence in one number.

    Why This Matters for SEO

    This scoring method allows measuring how relevant each piece of content is to a user’s search intent, not just based on keyword matching, but by understanding underlying semantics. Using this relevance as a basis for ranking improves insights into content alignment and optimization strategy, especially when comparing between multiple websites.

    The comparison step brings all key components of the project together. It compares how semantically similar two company websites’ content is, based on how relevant that content is to a given search query. The outcome is a ranking for each company and a final Spearman correlation score, which reflects how aligned those rankings are in terms of semantic relevance.

    Key Highlights:

    • Content Extraction: Website contents are scraped from both sets of URLs.
    • Semantic Scoring: Website contents are scored using dot product similarity.
    • Ranking: Website contents are ordered by relevance score.
    • Spearman Correlation: Measures similarity in ranked order between the two sets.
    • The final output shows ranked lists and a correlation score indicating content alignment.
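
    A condensed sketch of how these pieces could fit together for two companies (the name compare_companies and the positional pairing of URLs are illustrative assumptions, not taken from the source; it reuses fetch_content and compute_similarity_scores from above):

    import numpy as np
    from scipy.stats import spearmanr

    def compare_companies(query, urls_a, urls_b):
        """Rank each company's pages by query relevance and correlate the two rankings."""
        contents_a = [fetch_content(u) for u in urls_a]
        contents_b = [fetch_content(u) for u in urls_b]

        scores_a = compute_similarity_scores(query, contents_a)
        scores_b = compute_similarity_scores(query, contents_b)

        # Turn scores into rank positions (0 = most relevant). Spearman requires the
        # two lists to be the same length, so each company contributes equally many URLs.
        ranks_a = np.argsort(np.argsort(-scores_a))
        ranks_b = np.argsort(np.argsort(-scores_b))

        correlation, _ = spearmanr(ranks_a, ranks_b)
        return ranks_a, ranks_b, correlation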

    A short setup step loads the Sentence-BERT model used to compute similarity scores between the search query and blog content. The selected model—msmarco-distilbert-dot-v5—is fine-tuned specifically for retrieval tasks using dot product scoring, making it ideal for identifying the most relevant content for a given search intent.

    Key Highlights:

    • Pretrained & Optimized: The model is trained on large-scale datasets for semantic search.
    • Dot Product Friendly: Performs well with util.dot_score, providing numerically meaningful similarity scores.
    • Plug-and-Play: Once loaded, the model can directly be used for encoding both queries and blog content.
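
    Loading the model is a single call (a minimal sketch; the printed values reflect the model description in the next section):

    from sentence_transformers import SentenceTransformer

    # Retrieval-tuned Sentence-BERT checkpoint used throughout this project.
    model = SentenceTransformer('msmarco-distilbert-dot-v5')

    print(model.get_sentence_embedding_dimension())  # 768-dimensional embeddings
    print(model.max_seq_length)                      # maximum input length in tokens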

    Understanding the Semantic Similarity Model

    What Model Powers This Project?

    The backbone of this project is a Sentence-BERT model, specifically a fine-tuned variant called:

    msmarco-distilbert-dot-v5

    This model comes from the SentenceTransformers library and has been trained for tasks that require semantic matching, such as:

    • Finding how similar two texts are,
    • Ranking search results,
    • Matching queries to documents based on meaning — not just keywords.

    Model Architecture Breakdown

    The model is composed of two major components:

    Transformer Encoder

    DistilBERT is used here as the encoder, a lightweight version of BERT that retains roughly 97% of its language understanding performance while being smaller and faster.

    • Task: Converts raw text into a high-dimensional numerical vector (embedding) that represents its semantic meaning.
    • Max Sequence Length: 512 tokens (enough for most query and paragraph pairs).
    • Case Sensitivity: Preserves capitalization (e.g., AI vs ai), which helps in certain domain-specific contexts.

    This step turns a paragraph into a rich “meaning vector” using deep language understanding.

    Pooling Layer

    After the Transformer generates embeddings for each token in the input, a Pooling strategy is applied to compress all token vectors into a single sentence-level vector.

    ·         Pooling Mode Used: Mean Pooling

    • The model averages all token embeddings to produce one fixed-size vector for the entire sentence or paragraph.

    ·         Embedding Dimension: 768

    • Each text input is converted into a 768-dimensional dense vector.

    Mean Pooling helps summarize the overall meaning of the full sentence or passage, making it ideal for tasks like ranking and similarity comparison.
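
    A minimal illustration of mean pooling (toy token embeddings, not the model’s real values; the real encoder produces 768-dimensional vectors):

    import numpy as np

    # Pretend the encoder produced 4 token embeddings of dimension 5.
    token_embeddings = np.random.rand(4, 5)

    # Mean pooling: average across the token axis to get one sentence-level vector.
    sentence_embedding = token_embeddings.mean(axis=0)

    print(sentence_embedding.shape)  # (5,) -- one fixed-size vector per input text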

    Why This Model Was Chosen

    Dot Score Optimization: This model was specifically fine-tuned using dot product similarity (dot_score) rather than cosine similarity. That means it directly learns to maximize the dot product between related items, leading to more accurate semantic ranking.

    Domain-Specific Fine-tuning: Fine-tuned on the MS MARCO dataset — a real-world dataset with web search queries and relevant documents — which aligns closely with SEO and content ranking needs.

    Performance and Speed: The use of DistilBERT allows fast inference without sacrificing semantic quality.

    Once both the query and contents are encoded into embeddings, they are compared using the dot product. This produces a score for each web content that reflects its semantic similarity to the query. These scores form the basis for Spearman Rank Correlation calculations between websites.

    Result Analysis

    Website Rankings Based on Semantic Similarity

    The model generates semantic similarity scores between the query “Best SEO services” and the content of each webpage. These scores reflect how closely the content of each page matches the meaning of the search query.

    Website Rankings 1 (Company 1 – ThatWare)

    1.    https://thatware.co/web-development-services/: Score 72.1150

    2.    https://thatware.co/advanced-seo-services/: Score 53.5043

    3.    https://thatware.co/social-media-marketing/: Score 53.5043

    For ThatWare, the page “web-development-services” ranks the highest, with a score of 72.1150, indicating it is the most relevant to the query. The other pages related to SEO and social media marketing score lower at around 53.5, suggesting they are less relevant to the search intent behind “Best SEO services.”

    Website Rankings 2 (Company 2 – TechWebers)

    1.    https://www.techwebers.com/seo-services/: Score 78.2933

    2.    https://www.techwebers.com/custom-web-development/: Score 72.7930

    3.    https://www.techwebers.com/social-media-marketing/: Score 72.4986

    For TechWebers, the “seo-services” page ranks highest with 78.2933, followed by “custom-web-development” and “social-media-marketing”, with scores slightly above 72. These rankings suggest that TechWebers provides a clearer and more relevant match to the search query compared to ThatWare.

    Spearman Rank Correlation

    The Spearman Rank Correlation measures the similarity between the rankings of the two companies for the same query. In this case, the correlation value is calculated between the rankings for the query “Best SEO services”.

    Spearman Correlation: 0.5000

    A correlation of 0.5000 indicates a moderate positive relationship between the rankings of both companies. This means that both websites’ content ranks similarly in terms of relevance to the query, but they also have differences in how they rank their pages.

    Interpretation: The websites share some commonality in what they consider most relevant for “Best SEO services”, but also diverge in their rankings for some URLs. This is expected due to the different approaches each company may take in their content creation and SEO strategies.

    What Does This Mean for SEO?

    • Relevance: Higher similarity scores generally imply that the content is more semantically relevant to the search query, improving its chances of ranking higher in search results. Websites with higher similarity scores to the query should ideally attract more organic traffic for the given search term.
    • Rank Variations: The correlation of 0.5000 suggests that while there are similarities in how both companies rank their content, there are also notable differences. Understanding these rankings helps refine SEO strategies and adapt content to better target the user’s search intent.

    Ranking and Scoring Overview

    Each webpage is assigned a similarity score based on its relevance to the query. These scores reflect how closely the content matches the search intent behind the query “How to increase website traffic using SEO best practices”.

    Higher scores indicate that the content of the webpage is more relevant to the search query, meaning these pages are more likely to rank well in a search engine for this topic.

    For example, pages with higher scores typically cover SEO best practices, traffic growth strategies, or SEO tools that directly answer the query. On the other hand, pages with lower scores may touch on related topics but lack a strong focus on the specific SEO techniques needed for traffic growth.

    Website Rankings and Relevance:

    When comparing the rankings of each company’s pages, it is evident that both companies prioritize certain content that aligns well with the search query. Content that is more relevant to increasing website traffic through SEO practices is ranked higher, while other topics might rank lower due to less direct relevance.

    Here’s how the content ranks for each company:

    1.    Company 1:

    • The highest-ranking pages will likely be those that focus on SEO optimization or Google Business Profile updates, which are closely related to increasing website visibility and traffic.
    • Pages with a general focus on SEO tools or insights into specific Google updates may rank lower but still hold value for traffic strategies.

    2.    Company 2:

    • Pages that explain Divi SEO, full-service SEO strategies, and SEO keyword targeting are expected to score higher, as these topics directly address the best practices for driving traffic through SEO.
    • Similarly, pages focusing on local SEO or audience targeting through ads will also be relevant but might score lower compared to more SEO-specific strategies.

    3.    Company 3:

    • Pages discussing SEO for schools and video ranking improvements may appear on top, as they likely contain tailored, actionable advice for specific SEO best practices.
    • Pages with more general SEO analyses will follow, still providing value but not as directly related to increasing website traffic.

    Spearman Rank Correlation:

    The Spearman Rank Correlation measures the similarity in how each company ranks its webpages for the query. The results are represented as a correlation value between -1 and 1, where:

    ·         A positive correlation (above 0, approaching 1.0) indicates that the rankings between the two companies are similar, meaning both companies prioritize content in a similar way for this particular query.

    ·         A negative correlation (below 0, approaching -1.0) suggests a divergence in how the companies rank their content, indicating that they have very different approaches or priorities for the content related to the search query.

    For this specific query, we observed the following relationships:

    • Company 1 vs Company 2: There is a moderate positive correlation (0.3), indicating some alignment in how both companies rank their content. While they share a focus on SEO best practices, there are also notable differences in their content strategies.
    • Company 1 vs Company 3: A negative correlation (-0.2) indicates that their content strategies differ significantly for this query. While Company 1 may focus more on Google Business Profile and SEO tools, Company 3 might emphasize more specific practices like SEO for schools or video ranking.
    • Company 2 vs Company 3: A strong positive correlation (0.7) indicates that both companies have very similar content priorities and ranking strategies for this query. Their content might overlap in SEO best practices and strategies to increase website traffic.

    What This Means for SEO Strategy:

    • Similar Rankings: When companies show a positive correlation in rankings, it suggests that both prioritize similar SEO practices. This is useful for companies looking to understand the competitive landscape. If two companies rank similarly, they are likely targeting the same audience and optimizing for similar keywords and SEO tactics.
    • Divergent Rankings: A negative correlation between rankings highlights areas where companies differ in their content strategy. In this case, Company 1 might focus on more general or tool-based SEO techniques, while Company 3 might be more niche-focused. This can provide insight into how diverse SEO strategies can still be effective based on the audience they are trying to attract.
    • Optimizing Content: Understanding these correlations helps to refine content strategies by seeing which approaches (broad SEO practices vs. niche techniques) are more likely to resonate with users. For example, if the correlation with competitors is low, it could indicate an opportunity to differentiate the content further.

    What is the practical benefit of using semantic similarity in SEO?

    The practical benefit of semantic similarity in SEO is that it enables a deeper understanding of how well your content aligns with user intent. Traditional SEO focuses primarily on keyword matching, but semantic similarity goes beyond this by assessing the meaning behind the words on a webpage. This approach is crucial because search engines like Google have evolved to prioritize content that satisfies the intent of the query, rather than simply matching keywords.

    By leveraging this technique, businesses can:

    • Create more relevant content: Ensure your web pages directly address what users are looking for, not just based on keyword matches but based on what the content is truly about.
    • Improve content targeting: Prioritize topics that align with what users search for, increasing chances of ranking higher for queries that matter to your audience.
    • Enhance ranking potential: Search engines like Google use sophisticated algorithms that focus on the context and relevance of the content, so optimizing for semantic relevance can boost organic search visibility.

    How does the Spearman correlation help in analyzing SEO performance?

    Spearman rank correlation is a statistical method that helps analyze the relationship between rankings of multiple webpages for a given query. It compares how similar or different the ranking orders are between different websites.

    In the context of SEO, Spearman’s correlation tells us whether different websites prioritize content similarly or differently. A higher positive correlation (close to 1) means the rankings of two websites are very similar, indicating that both are targeting the same type of content or SEO strategy. A negative correlation suggests that the rankings are inversely related, meaning the content strategies of the websites diverge.

    This is valuable because:

    • It helps identify competitors’ strategies: By seeing how your rankings compare to competitors, you can assess if you are missing out on specific SEO tactics that they are employing.
    • It allows for optimization: If you find a negative correlation between your content and a competitor’s content, you may need to adjust your strategy to align better with what users are searching for.

    How can this project help improve our website traffic?

    The primary goal of this project is to ensure that your content is highly relevant to search queries that users are typing into search engines. By comparing your content with competitors’ content for specific search queries, you can:

    • Optimize content relevance: Focus on creating content that addresses user needs more effectively than your competitors, ensuring you rank higher for relevant searches.
    • Identify content gaps: Find areas where your content is lacking compared to others, allowing you to fill those gaps with additional information or insights.
    • Enhance user engagement: By offering more relevant and useful content, users will spend more time on your site, which in turn improves user experience signals and can contribute to better search rankings.

    Ultimately, better-targeted content results in increased organic traffic, as search engines will view your pages as more valuable to users, improving your website’s chances of appearing in the top search results.

    What should Website owners do after getting the results from this project?

    Once you have the similarity scores, rankings, and Spearman correlation results, the next step is to take actionable steps to enhance your content and SEO strategy. Here’s a clear path forward:

    ·         Review Top-Scoring Pages

    • Identify which of your pages scored highest for the tested query. These are the pages that best align with user intent.
    • Preserve and promote these pages. Consider improving their visibility by linking to them internally, updating metadata, or highlighting them on landing pages.

    ·         Analyze Low-Scoring Pages

    • Pages with low similarity scores are likely not addressing the tested query effectively.
    • Rework or refocus the content on those pages. Include more relevant information, improve clarity, and ensure the main topic aligns with search intent.

    ·         Benchmark Against Competitors

    • Compare your ranking order with competitors. If their pages consistently outrank yours for certain queries:
    • Study their structure, language, headings, and content depth.
    • Incorporate missing angles or perspectives into your own content where appropriate.

    ·         Identify Content Gaps

    • If your content shows low correlation or diverges from what competitors are ranking for, it might indicate you’re targeting the wrong topics or missing key subtopics.
    • Use this insight to plan new content that fills those gaps and aligns more closely with how top-performing sites approach the query.

    ·         Prioritize Based on Business Value

    • Not all queries are equally important. Focus your optimization efforts on queries that drive high-value traffic or are most aligned with your business goals.

    ·         Continuously Monitor and Iterate

    • SEO is not a one-time fix. Re-run this analysis periodically with updated content and evolving query trends to track progress and adjust your strategy as needed.

    Tuhin Banik

    Thatware | Founder & CEO

    Tuhin is recognized across the globe for his vision to revolutionize the digital transformation industry with the help of cutting-edge technology. He won bronze for India at the Stevie Awards USA, received the India Business Awards and the India Technology Award, was named among the Top 100 influential tech leaders by Analytics Insights, was recognized as a Clutch Global Front runner in digital marketing, founded the fastest growing company in Asia according to The CEO Magazine, and is a TEDx and BrightonSEO speaker.

