SEO Consultant – About

Sergiu George

Nurdic.com is an SEO consultancy run by Sergiu George. The name, as well as the initial idea, originated in 2014 at Middlesex University as part of an assignment to create a digital marketing blog, which turned into a thesis on SEO and then into a full-time career. Over the last decade, Sergiu has worked in both agency and client-side SEO roles in London, Berlin and Amsterdam, while developing the consultancy as a side project.

Rather than recycling information from other sources, Nurdic aims to provide instructions and formulas that together form an SEO Implementation Framework for WordPress: topics are presented systematically, structured properly, explained in depth and accompanied by practical implementation instructions.

You are, of course, welcome to use all of the information on this website in any way you see fit, whether to read up on SEO topics at large or to find SEO implementation instructions for WordPress. If, however, you prefer to have someone do it for you, or to consult you on the development of SEO within your organisation, feel free to reach out.

SEO Consulting Process

Contrary to popular belief, not all websites can greatly benefit from SEO input. Some websites serve a market that is difficult to reach through organic search, while others are not yet at the point in their development where SEO makes sense. So, before you choose any of the SEO packages, a preliminary check of your website will be carried out free of charge to determine whether it is eligible for SEO work.

SEO Consultant Pricing by Package

SEO Consulting Process Description

The Consulting Process itself depends first and foremost on the SEO package you decide to go for. The main difference between the packages is that the “Project” package focuses on identifying and fixing a predetermined set of issues, does not cover SEO implementation and does not require continuous commitment from either party. The “Monthly” packages require continuous commitment from both sides for a period of at least 6 months and are superior to the former in that they allow for more granular optimisation of all aspects of SEO, including a custom Content Strategy, implementation in WordPress (optional) and monthly progress reporting.

1. Keyword Research

Employing a handful of main target keywords to expand the list to all possible keyword variations (short or long-tail) to target in the future. Can be market-specific or cross-market. Can be limited to evergreen keywords.

Specifying the Search Intent of each identified keyword and providing directions on how to match the content with the search intent.

Identifying your customers’ topics of interest to target with niche product landing pages and blog posts, all based on your industry and a select number of head terms.

Identifying all organically ranking keywords of your competitors to include in the keyword research.

Grouping keyword variations together and assigning a landing page to each group.
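
To illustrate the grouping step in its simplest form, here is a minimal sketch in Python; it is purely illustrative, and all keywords and landing page slugs in it are hypothetical. It clusters near-identical keyword variations by their normalised token set so that each cluster can be mapped to a single landing page:

    # Illustrative sketch only: group near-identical keyword variations by their
    # normalised token set, then derive one landing page slug per group.
    # All keywords and slugs below are hypothetical examples.
    from collections import defaultdict

    keywords = [
        "seo consultant london",
        "london seo consultant",
        "seo consultant in london",
        "wordpress seo checklist",
        "checklist for wordpress seo",
    ]

    STOPWORDS = {"in", "for", "the", "a"}

    def normalise(keyword):
        # Reduce a keyword to its set of meaningful tokens.
        return frozenset(t for t in keyword.lower().split() if t not in STOPWORDS)

    groups = defaultdict(list)
    for kw in keywords:
        groups[normalise(kw)].append(kw)

    # Assign one (hypothetical) landing page slug per keyword group.
    for tokens, variations in groups.items():
        slug = "/" + "-".join(sorted(tokens)) + "/"
        print(slug, variations)

In practice the grouping is done with dedicated keyword tools and manual review; the sketch only shows the underlying idea of one landing page per group of keyword variations.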

2. Onpage Optimisation(*)

Optimisation of key onpage elements such as page titles, meta descriptions, page URLs (slug), and onpage headers.

Troubleshooting and optimisation of website taxonomy (permalink settings), improper use of symbols in URLs, SSL encryption (migration to https), verifying the canonical domain, subdomain-to-subfolder migration, and preventing automatically generated duplicate pages.

Reviewing all internal links across the website (navigation, in-content and footer), accounting for crawl depth and anchor text relevance. Spotting and fixing orphan pages. Homepage internal linking optimisation.

Visual assessment of image uniqueness and originality*, checking whether any text is present only as an image, properly naming and describing images (i.e. file names, titles, alt tags, longdesc), specifying which images should be PNG, JPEG, GIF or SVG (WebP optional) for optimal quality and loading speed. Testing image resolution for Retina displays. Using responsive images and enabling lazy loading. Creating image sitemaps. Proper image attribution (CC)*, if needed. Specifying Open Graph tags for social media. Flagging broken images, if any.
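
As a rough illustration of the kind of checks this covers, the sketch below (not the actual audit tooling) fetches a single page and flags a missing title, meta description, canonical tag and images without alt text, using the requests and BeautifulSoup libraries; the URL is a hypothetical example:

    # Minimal on-page spot-check: title, meta description, canonical and image alt text.
    # Requires: pip install requests beautifulsoup4. The URL is a hypothetical example.
    import requests
    from bs4 import BeautifulSoup

    url = "https://example.com/sample-page/"
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title = soup.title.string.strip() if soup.title and soup.title.string else None
    description = soup.find("meta", attrs={"name": "description"})
    canonical = soup.find("link", rel="canonical")

    print("Title:", title or "MISSING")
    print("Meta description:", description["content"] if description else "MISSING")
    print("Canonical:", canonical["href"] if canonical else "MISSING")

    # Flag images without alt text, a common on-page issue.
    for img in soup.find_all("img"):
        if not img.get("alt"):
            print("Image missing alt text:", img.get("src"))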

3. Technical Audits

Carrying out a standardised Technical SEO Audit covering over 300 common optimisation checks. All checks are prioritised by potential impact, and every issue comes with a detailed explanation, how-to-fix information and a list of affected URLs. Technical Audits require input from developers, and additional validation checks are possible after implementation. The Technical Audit also contains an Indexation Audit, which is its most important part and includes the following aspects:

  1. Flagging all 404 issues (hard and soft)
  2. Rewiring all redirects with a wrongly assigned status code (e.g. 302 instead of 301). Fixing redirect chains, loops and broken redirects (a simplified check is sketched after this list).
  3. Auditing all canonical tags
  4. Auditing all indexable vs non-indexable pages. Aligning the results with sitemaps and robots.txt.
  5. Providing recommendations for optimising pages excluded from the index due to duplication or thin content, in order to facilitate reindexing.
  6. Ensuring only https pages are indexed (not http)
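
By way of illustration only (the actual audit relies on dedicated crawling tools), the following sketch checks a small list of hypothetical URLs for hard 404s, redirect chains and temporary (302) redirects using the requests library:

    # Illustrative indexation/redirect spot-check: status codes, chains and 302s.
    # Requires: pip install requests. The URLs below are hypothetical examples.
    import requests

    urls = [
        "http://example.com/old-page/",
        "https://example.com/deleted-page/",
    ]

    for url in urls:
        response = requests.get(url, allow_redirects=True, timeout=10)
        chain = [(r.status_code, r.url) for r in response.history]
        if len(chain) > 1:
            print(f"Redirect chain ({len(chain)} hops) for {url}: {chain}")
        for status, hop in chain:
            if status == 302:
                print(f"Temporary redirect (302) that may need to be a 301: {hop}")
        if response.status_code == 404:
            print(f"Hard 404: {url}")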

4. Content Audit and Strategy

  1. The scope of the Content Strategy is defined by thorough Keyword Research. Considers the customer decision-making funnel (AIDA model).
  2. Prioritised Topic Selection and Detailed Content Briefs: Title, Headers, Intro & Conclusions, Topic Angle Selection, Internal Linking.
  3. Considers Content Scaling and Repurposing opportunities outside of Organic Search.

5. Backlink Audit and Link-Building Strategy

The Backlink Audit is a rather straightforward report that describes the current state of your website from an offsite point of view. Some of the points that are researched and described in the audit are:

  1. General Stats
    • Referring domains
    • Backlinks
    • Authors
  2. Link Velocity Analysis
    • Broken backlinks and solution instructions
    • Nofollow backlinks and solution instructions (a simplified spot-check for both is sketched after this list)
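
As a small illustration of how broken and nofollow backlinks can be spot-checked (the audit itself is based on backlink data from dedicated tools), the sketch below fetches a few hypothetical referring pages and reports whether each still links to the target domain and whether the link carries a nofollow attribute:

    # Illustrative backlink spot-check: is the referring page live, does it still link
    # to the target domain, and is the link nofollow? All URLs are hypothetical examples.
    # Requires: pip install requests beautifulsoup4.
    import requests
    from bs4 import BeautifulSoup

    TARGET_DOMAIN = "example.com"
    referring_pages = [
        "https://blog.example.org/best-seo-resources/",
        "https://news.example.net/interview/",
    ]

    for page in referring_pages:
        response = requests.get(page, timeout=10)
        if response.status_code >= 400:
            print(f"Broken referring page ({response.status_code}): {page}")
            continue
        soup = BeautifulSoup(response.text, "html.parser")
        links = [a for a in soup.find_all("a", href=True) if TARGET_DOMAIN in a["href"]]
        if not links:
            print(f"Backlink no longer present on {page}")
        for a in links:
            if "nofollow" in (a.get("rel") or []):
                print(f"Nofollow backlink on {page}: {a['href']}")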

Unlike the Backlink Audit, the Link Building Strategy is far from straightforward. Link building strategies can only be implemented by Digital PR Managers and, although not difficult to measure, are quite difficult to set expectations for. At this stage, the Link Building Strategy, as part of the Premium Monthly SEO Package, involves the selection of a number of targeted SEO tactics that would be expected to work in the individual case of the website. It will also include a list of potential publications to acquire backlinks from, along with insight into the long-term development and management of the website’s authority. Some of the more specific aspects include:

  1. Extended Link Velocity Analysis
    • Lost Backlinks Report split by causes and describing solutions
  2. Link Intersect for up to 10 competitors
  3. Outreach pitches to journalists for contextual backlinks (optional)

6. SEO Tracking and Reporting

Setting up Google Analytics to monitor performance across the entire website and across all digital channels.

Setting up and monitoring Google Search Console to track organic performance coming from Google, along with indexation status across the website.

Setting up Keyword Tracking in Ahrefs to follow the dynamics of the organic rankings in Google for the target keywords identified and selected at the Keyword Research stage.
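
For illustration, the sketch below shows one way Search Console data could be pulled programmatically with the google-api-python-client library, assuming a service account that has been granted access to the property; the site URL and key file name are hypothetical:

    # Illustrative sketch: pull top organic queries from the Google Search Console API.
    # Requires: pip install google-api-python-client google-auth.
    # The property URL and key file are hypothetical; a real setup needs a service
    # account with access granted in Search Console.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SITE_URL = "https://example.com/"
    KEY_FILE = "service-account.json"

    credentials = service_account.Credentials.from_service_account_file(
        KEY_FILE, scopes=["https://www.googleapis.com/auth/webmasters.readonly"]
    )
    service = build("searchconsole", "v1", credentials=credentials)

    response = service.searchanalytics().query(
        siteUrl=SITE_URL,
        body={
            "startDate": "2024-01-01",
            "endDate": "2024-01-31",
            "dimensions": ["query"],
            "rowLimit": 25,
        },
    ).execute()

    for row in response.get("rows", []):
        print(row["keys"][0], row["clicks"], round(row["position"], 1))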

The Inspiration for Better SEO at Nurdic

A decade in SEO is just about enough time to realise how differently one can approach optimising a website. There are many variables at play, and some of them change on a regular basis. If there is one thing that sets Nurdic and its SEO process apart from most other SEO players and their methods, it’s the commitment to an academic approach to thinking about SEO, which starts with understanding its history and using it as a basis for understanding its future.

A Brief History of Search Engines and the World Wide Web

The only way to understand the future of search engines is by looking at their evolution over time, noting the things that worked and those that failed.

Sergiu George

1990

The World Wide Web, First Website and First Search Engine

The World Wide Web, the first website and the first search engine all appeared within the span of a year. Incidentally, the first search engine, a small project by a McGill University student in Montreal, was the one to precede the other two. These three concepts were tied together from the very start and have been heavily reliant on each other ever since.

The First Few Thousand Websites

At heart, the World Wide Web is the world’s collective library, and the search engine is the librarian, whose role is to:
• Understand the world’s knowledge and 
• Match it to user needs,
• Instantly!

1994

Thousands of Search Engines Failed

In the early days of the World Wide Web, the internet was flooded by a mountain of search engines, each with its own cataloguing techniques, search algorithms, target audiences and all kinds of special features. Two decades later, it is safe to say that all but one of them have failed.

1. Understanding the World’s Knowledge
The vast majority of early search engines had fairly narrow targeting, which meant you had to use different search engines for different things. All of them relied heavily on webmasters to understand what the content was about, which raised a whole set of issues, most importantly SPAM.

2. Matching it to User Needs
All search engines were a mixture of search engines, directories, news sites and catalogues, among other things, getting in the way of users’ needs at every interaction. In other words, the search engines grew better at cataloguing the world’s knowledge, but it happened at the expense of matching it to users’ needs.

3. Instantly!
The special features and excessive advertising increased search loading times. Most search engines hosted an index of websites and webpages that could take weeks or months to update. In effect, search results took a long time to load, and once they appeared, some of the content was no longer available or had been modified beyond the point of relevance.

The First Few Million Websites

The World Wide Web hit 1 million websites at some point in 1997. As you might expect, finding additional interesting facts about that period becomes increasingly difficult.

1998

One Search Engine Succeeded

It wasn’t until the publication of an academic paper called ‘The Anatomy of a Large-Scale Hypertextual Web Search Engine’ by Sergey Brin and Lawrence Page in 1998, the project that grew into today’s Google, that the idea of the search engine really took flight. At first, Google was hardly any better than the other search engines: it was mediocre at understanding the content it was indexing, and it wasn’t a leader in matching it to user needs either. However, it spread its dominance over the search engine game almost overnight, which is most often attributed to its slick user interface that had virtually no features.

In the long haul, what allowed Google to become equated with web search in the minds of billions was exactly what all the other search engines had failed at, that initial vision of:
• understanding the world’s knowledge 
• matching it to users’ needs
• seamlessly


The crucial factor has been their ability to understand information in increasing depth, with limited reliance on webmasters, and to match it to user needs in a manner that is universal, rich and increasingly accurate over time, through what we most commonly know today as algorithm updates to their search engine.

They delivered on the instantaneity aspect too, by making today’s search results virtually real-time.
In simple terms, as a result, Google has managed to accomplish what none of the other search engines did: return a fairly relevant collection of links for almost any search query, instantly.

The First Billion Users

It might come as a shock, but it took search engines less than 10 years to get to a billion users.

2005

The Relationship Between the Search Engine and SEO

As search engines went mainstream, manipulating them for commercial gain became the informal definition of SEO, and its outcomes were widely considered to be Web-SPAM. By 2005, Google had started to take Web-SPAM very seriously and hasn’t stopped working on it ever since. At the same time, it empowered webmasters with tools to drive commercial value through relevance to their customers as opposed to unethical practices.
• Launched Google Analytics and Google Search Console that allowed website owners to better understand the behaviour of their customers online
• Started supporting content mark-up initiatives that helped search engines better understand the knowledge behind the information:
▫ Sitemaps and robots.txt files to give webmasters control over what gets crawled and indexed
▫ Canonical tags and pagination to fight duplicate content and aid content-attribution
▫ Structured Data (schema.org vocabulary) to aid the understanding of semantics
• Put an emphasis on branding to increase the websites’ commitment to their online presence
▫ Started taking reviews into account
▫ Started using signals from social media
• Made the web relevant to our location and our devices
• Punished unethical SEO practices like
▫ Keyword-stuffing and over-optimisation
▫ Link-farms and paid links
▫ Duplicate content and thin content

The First Trillion Pages

At some point before 2010, Google revealed that it was already aware of some 1 trillion unique pages on the web.

2010

Intent-Based Search and Rich Search Results

Although Google’s war on SPAM has continued, it also started stretching the idea of what search can accomplish, putting users’ search intent at the centre. By increasing its understanding of semantics, its ability to recognise context and its measurement of user behaviour, Google has learned to understand and anticipate user intent on an individual basis, and thus to increase the accuracy with which it can match content to search queries.

In simple terms, returning a fairly relevant collection of links for almost any search query has, as a result, been replaced by returning the web’s single most relevant resource, which is able to fully solve the search query.

Some of the newly developed technologies:
• Autosuggest
• SERP-features and site-links
• Mobile and local
• Encryption and security
• Machine learning (RankBrain)
• Policy on interstitials

The First Trillion Yearly Google Searches

At some point before 2015, Google revealed that it handles at least a trillion searches per year, a number that has been growing ever since.

2015

The Advent of Real-Time

In 2015, Google announced that you could get almost any data on what’s happening in real time.

Half of the World Population on Search

By 2016, according to some sources, the World Wide Web had 1 billion websites, and over half the world’s population was using search engines. However, as it happens, more than 80% of those websites were inactive.

2020

BERT Language Interpretation

Although BERT was introduced before 2020, it was rolled out over a number of consecutive years. Google claims that BERT has helped the search engine “understand searches better than ever before”.

Any efforts directed at a website in the public domain that are not naturally geared towards fulfilling these underlying principles will become an SEO liability in the long haul.

Sergiu George

The Philosophy behind Nurdic

At heart, the search engine’s role is to:

  1. Understand the world’s knowledge and 
  2. Match it to user needs
  3. Instantly

The technology that powers the search engine has changed many thousands of times over, but the core philosophy established by that first handful of visionaries, and reinforced by Google in particular, has remained intact. Thus, the best way to define SEO is by reverse-engineering how any given website feeds these three aspects of a search engine.

1. understand semantics match intent

Query fulfilment is the single mission of any given search. The search engine goes to great lengths to estimate and rank the most relevant resources for fulfilling that query. However, the website has the absolute power to determine whether that query will, in fact, end up being fulfilled. In an imperfect world, in order to understand knowledge, search engines look at website content and match it to user needs through search queries, leading to Search Engine Results Pages.

2. understand content match query

Understanding the world’s knowledge and matching it to user needs is a task difficult enough, but it becomes even more so when both aspects are constantly changing. Search Engine Algorithm Updates are designed to either increase the search engine’s depth of knowledge or accuracy of matching it to user needs by getting a tighter grip on context and user behaviour.

3. increase depth and accuracy

Algorithm updates that aim primarily to increase the search engine’s depth of understanding of information look at a website’s context and content in order to get a better grasp of semantics.

4. increase content context semantics depth

Algorithm updates that aim primarily to increase the accuracy of matching knowledge to user needs look at User Behaviour and Search Queries in order to anticipate Search Intent.

5. increase accuracy intent query behaviour

So, if we reverse this vision of a search engine to define the scope of SEO, we obtain the practice of understanding your customers in increasing depth in order to serve their knowledge needs with increasing accuracy. Thus, the role of the website is first and foremost to understand its users, in order to serve the right semantics, by investigating their search behaviour, search queries and search intent.

6. understanding intent query behaviour

Secondly, it’s about shaping the website’s wider context, content and semantics in a manner that fulfils user queries and intent with increasing accuracy over time, in order to drive user behaviour.

7. matching intent query behaviour

Lastly, it’s about recognising that rankings, SERPs and algorithm updates are merely a delayed validation of past SEO efforts, as opposed to a practical guide for setting optimisation priorities.

8. validating index serps algorithms