
Tips for Better B2B Research with ChatGPT-4o

Prompts, Elephants and Hallucinations

The latest iteration of ChatGPT – ChatGPT-4o – can now browse the web and carry out live web searches. This is a game-changer for generative search in B2B applications. I explained why in our recent blog. 


I highly recommend that anyone with an interest in B2B search marketing tries out the new ChatGPT-4o. If you fancy giving it a go, here are some hints and tips that I and the team at Sharp Ahead have picked up from our testing. 

Before we start: ChatGPT is evolving quickly. These tips make sense today, but they might be obsolete in a few months’ time. I hope they are a useful starting point but please use common sense and your own judgement as you find your way around ChatGPT-4o. 

A reminder: although some of my tips are applicable to older versions of ChatGPT, I don’t recommend the older versions for B2B research. You should use ChatGPT-4o and not the older versions. This means that you will most likely need a paid ChatGPT account which currently starts at US$20/month. (Though there is now some limited access to ChatGPT-4o via the free account tier.) 

We need to talk about Chat 

Tip 1: expect to use multiple prompts 

We’re all used to conventional search engines like Google and Bing. Fire in a few carefully chosen search words, hit Enter and doom-scroll until you find a useful link. That one-shot habit won’t bring out the best of ChatGPT. 

Instead, remember that your interaction with ChatGPT is a conversation in which you will iterate towards the best result using multiple prompts. There’s no pressure to get the right result straight away. You can expect to use at least two or three follow-up prompts to shape the results the way you need them. 

Here’s how I used three prompts to build and refine a shortlist of possible coworking spaces. My sequence of prompts went like this: 

  1. What are some of the best coworking spaces in Oxford?
  2. Which ones are dog friendly? 
  3. Put them in a table with pros and cons for each workspace 
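
If you’re curious what that iterative pattern looks like under the hood, here’s a minimal sketch using OpenAI’s Python SDK. This is an assumption on my part – everything in this blog uses the ChatGPT web interface, and the model name and prompts are purely illustrative – but it shows the key point: each follow-up is sent together with the full conversation so far.

```python
# Minimal sketch: iterative prompting via the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
messages = []  # the running conversation; every follow-up builds on it

for prompt in [
    "What are some of the best coworking spaces in Oxford?",
    "Which ones are dog friendly?",
    "Put them in a table with pros and cons for each workspace",
]:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(answer)
```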

Why is ChatGPT like an elephant? 

Tip 2: be ready to start a new chat 

When you are working with multiple prompts it’s important to remember that ChatGPT never forgets – until you tell it to. This is a big difference from a conventional search engine, where your previous queries will have little or no impact on the results from a fresh query.  

 
Sometimes one of your initial prompts will take ChatGPT down a rabbit hole. Don’t expect it to escape. This is dangerous in B2B research because the content of an early prompt might bias the end results of an extended chat in ways that you don’t realise. For example, in one of our tests I asked ChatGPT for “office space providers similar to WeWork”. Even though I tried to broaden the context with subsequent prompts, ChatGPT’s answers remained highly skewed around WeWork for the remainder of the chat. 

Fortunately, there’s a simple answer here. Just start a new chat whenever the previous one is stuck in a rabbit hole. 

You don’t lose anything by doing this. ChatGPT keeps your old chat in a separate thread, so you can switch back to it to cross-reference, or even continue it later if you need to. And if you don’t want to retype a complex prompt, you can quickly copy and paste it from the previous chat. 

Judging whether to persevere with your current chat or to start afresh is one of the key skills you’ll need to develop to get the best from ChatGPT. Be alert to the signs of a conversational rabbit hole, such as keywords from much earlier in the conversation continuing to appear with high frequency even when you are trying to broaden things. In my own experience, it was almost always better to start a new chat, adjusting my prompts in the light of what I’d learned from the previous one, than to try to rescue a chat that had gone astray. 
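
For the technically curious, here’s one way to picture why a new chat works. This is a sketch in API terms, not the ChatGPT interface itself, and the prompts are just illustrative: a “chat” is nothing more than the list of messages sent back to the model each time, so starting a new chat means starting a new, empty list that carries none of the old bias.

```python
# The stuck chat: every answer is generated with this full history in view,
# so the early "similar to WeWork" framing keeps influencing later answers.
stuck_chat = [
    {"role": "user", "content": "office space providers similar to WeWork"},
    # ...follow-up prompts that never quite escape the WeWork framing...
]

# A new chat is simply an empty history. Nothing from the old conversation
# can bias the results - which is exactly why starting afresh works.
fresh_chat = [
    {"role": "user", "content": "flexible office space providers in Oxford"},
]
```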

You Want Whipped Cream and a Cherry on the Top?  

Tip 3: be demanding about the end result  

ChatGPT has a huge repertoire of skills for collating, formatting and organizing results. These can be a massive time-saver. When you are confident that ChatGPT is giving you the answers you need, use a specific prompt to get them in the exact format you want. 

I often find asking for a table of results gives a very useful output. For example, here’s a prompt you can use to structure a shortlist of potential suppliers: 

  • Can you create a table with a price comparison of their services, which includes hot-desking, short and long-term rental terms, meeting room costs and any other options such as reception, printing etc. Please include their websites and contact details too. 

ChatGPT can do a lot more than just formatting text. For instance, if you’re working with numerical data you can ask it to plot a graph. You might not discover these capabilities at first. Be ambitious in your prompts and ask for what you want, even if you’re worried it might be beyond the capabilities of ChatGPT. It might surprise you! Worst case, you can always just start a new chat. 

Where did you get THAT from?!? 

Tip 4: ask for sources and explanations 

ChatGPT-4o will often volunteer information about its sources. So, you might see website links or other attribution within its output. But if this isn’t present, ask for it via a follow-up prompt. For example: 
“What are your sources for that information?” or “What are your criteria for choosing those results?”

Asking for sources and explanations is important for two reasons. 

Firstly, the information might be directly helpful for you: for instance, you might want to read the full text of the original article that ChatGPT used as input to its response. Or you might want to instruct ChatGPT to use different criteria. 

Secondly, these follow-up prompts will help you spot when ChatGPT is hallucinating. All generative AI tools are prone to “hallucinations”, that is, just making s*** up when they don’t have enough information. ChatGPT-4o is much better than previous models in this respect, but it can still fall into the trap of creating a fictitious, people-pleasing answer when it doesn’t have any other way to generate a response. That’s a disaster if you plan to use the output for some serious business purpose. If you ask for sources, you can make a much better judgement about the likely reliability of ChatGPT’s answers. 

In my example, ChatGPT’s sources turned out to be somewhat unreliable!  

And indeed, it turns out many of these coworking spaces are NOT dog-friendly. Beware the AI people-pleaser. 

The Latest and Greatest 

Tip 5: force a web search / web browsing 

ChatGPT-4o can search the web and can browse live web pages. But it may choose not to do so if it doesn’t think live web data is necessary to answer your questions. So once again: use a follow-up prompt to force ChatGPT to get current data from the web. 

For example:  

“Please check the suppliers’ websites for the latest pricing information and include that in your response” or “Please search the web for up-to-date information on this”

I used this approach in my coworking space research to pull up-to-date pricing information straight from the suppliers’ websites. 

Come on in, the water’s… mostly lovely  

Tip 6: just try it 

You’ll learn more about ChatGPT in an hour of hands-on practice than by reading a hundred blogs. So dive in and give it a go. Be demanding and put it to work with some challenging, meaningful use cases. 

You’ve nothing much to lose except a bit of time, and a lot to gain if you find it as useful as we have. There are just a couple of caveats: 

  • You will most likely need a paid account. But the cost is pretty minimal given the potential value. 
  • There are some privacy and confidentiality implications. For example ChatGPT can, in principle, use the prompts that you provide to train future models, which could lead to your prompts appearing in the output for other users. If you are planning to use proprietary, confidential or personal data in your interactions with ChatGPT you should be very careful. You may need to consider a different way of using generative AI for applications that work with this type of data. 

We’d love to hear from you

I hope we’ve given you some impetus to explore ChatGPT for your own B2B research applications. We’d love to hear your experiences, good or bad, and any tips of your own that you are willing to share. Please use the comments or get in touch! 


    ChatGPT-4o enters the generative search race

    Generative Search Runners and Riders

    There’s a new player in generative search for B2B – and it’s not a search engine 


    I’ve been tracking the evolution of generative search as a tool for B2B research for a year now. To date, I’ve looked at the generative search offerings from the established search engines Bing and Google. But there’s a new contender – ChatGPT-4o from OpenAI.  

    ChatGPT has learned to use a live connection to web search. And that’s a game-changer. 

    As ChatGPT itself rather modestly puts it: “I can help you find the most current information by browsing the web if needed”. 

    Why was ChatGPT so poor for B2B research in the past? 

    I didn’t include ChatGPT in my previous research on generative search as it was clear it just couldn’t be a good tool for most B2B research. That’s down to the way ChatGPT is built. It uses a language model that’s built from a fixed set of training data. This brings in limitations of both recency and scope: 

    Recency

    LLMs are built and trained using data from a fixed moment in time. For instance in the case of ChatGPT 3.5, the training process finished in early 2022. ChatGPT 3.5 simply doesn’t know anything about the world after that point, and never will. New versions of the language models are released from time to time but there will always be a time lag between the LLM’s latest model and current reality. For most B2B research that lack of recency is a showstopper – there’s no point drawing up a shortlist of suppliers based on the market dynamics of two years ago, or making a choice of product based on outdated pricing or feature information. 

    Scope

    AI models are built using impressively large sets of training data, but they can’t encompass every single fact from the whole internet. A lot of B2B research is about very detailed, niche questions – such as “what’s the cheapest dog-friendly flexible office in Milton Keynes?”. ChatGPT and other LLMs will never be able to encompass all of those facts within their training sets. So if you ask them a question like that, you’ll find that either they don’t know the answer, or – worse – they will “hallucinate” and invent a plausible but non-factual answer. Neither of those is helpful for a B2B research task. 

    So if you used ChatGPT 3.5 for B2B research, you’d have obtained an out-of-date, incomplete and quite possibly non-factual answer. Not an attractive option! 

    A quick note on ChatGPT naming and pricing 

    Before we go too much further: there are three main “versions” of ChatGPT in play as of today: 

    • ChatGPT-3.5 was the first version to really impact the mass market. It’s still available via a free tier of ChatGPT’s usage model. It was trained on data up to early 2022. It does NOT support the live web links that I’m focussing on today. 
    • ChatGPT-4 was a major new version released in March 2023 and trained on data up to September 2021. It’s only available as a paid tool with subscriptions costing from US$20 per month. 
    • ChatGPT-4o (the “o” stands for “omni”, reflecting its ability to work with different modes of communication such as images and audio) was rolled out around May 2024 and is again a paid tool available via the same subscription as ChatGPT-4. It has a lot of improvements over previous versions including substantially faster response times. 

    If you don’t mind stumping up the US$20/month for a paid account, I can’t see any reason to ever prefer one of the earlier versions. ChatGPT-4o seems superior to its predecessors in every meaningful way. 

    So when did ChatGPT become a search engine? 

    ChatGPT-4o (and indeed its predecessor ChatGPT-4, but NOT the free ChatGPT-3.5) has a lot of different capabilities to link intelligently to other technologies outside the core LLM. One is the ability to carry out a live web search and access other live web data. That is – ChatGPT recognises that it needs data from outside its training set in order to answer a question, creates the relevant web search, browses some of the live pages in the search results, and integrates those results into the conversation. Here’s an example. 

    See how ChatGPT has decided to search for “best coworking space in Oxford UK 2024” in response to my question? It then fetched data from the sites that appear in the search results and integrated all of that into its answer. 

    If I want, I can click on the search icon in the chat history and see the full details of the search. 

    I guess in principle ChatGPT could use any search engine for this purpose, but in all of the examples I’ve seen it uses Bing. That’s not surprising given the close commercial relationship between Bing’s owner Microsoft and ChatGPT’s owner OpenAI. 

    Does this approach look familiar? It’s basically the same process used by Bing Chat (now renamed Copilot). So we’re seeing a convergence of sorts between ChatGPT as a general-purpose AI-powered assistant, and the AI-powered search engines like Bing Copilot and Google SGE. 

    How does ChatGPT perform as a B2B “search engine”? 

    This all sounds exciting but it’s time to bring out my favourite question about a new technology: “is it any use?”. I’ve put ChatGPT-4o through its paces using my previous research methodology. As a reminder, this uses a standard set of 12 realistic B2B research tasks split across different industries and different stages of the research/buying journey. 

    The short summary: ChatGPT-4o is good. Really good. The experience is a step change relative to any generative search system I’ve tested before. Here are a few highlights from my testing. 

    ChatGPT-4o makes intelligent choices about when and how to use external search and external browsing 

    I showed an example above where ChatGPT-4o took my prompt “what’s the best coworking space in Oxford?” and immediately went to Bing with “best coworking space in Oxford UK 2024”. That’s smart – “Oxford UK” to avoid any confusion with other places, and “2024” in recognition of the implied recency in my question.  

    ChatGPT-4o is smart – sometimes! 

    Compare these answers to my test question “how should I split my budget between Google Search and LinkedIn Ads?”. The first is from Bing Chat back in April 2023. 

    It’s not bad. There’s a suggested numerical answer to the question and a clearly quoted source. There’s also some additional information about the technical details of budgets on the different platforms, which is correct but irrelevant to my original question. ChatGPT-4o’s answer to the same question was very different.

    It gave a longer, much more thoughtful answer. Instead of just “use a 50:50 split”, ChatGPT-4o took a broader interpretation of my question and outlined the strategy one should use for deciding on and monitoring the best split. And it’s a GOOD answer! I’d be happy to receive an answer like that from an expert B2B PPC practitioner. 

    ChatGPT-4o responds well to follow-up prompts. This is two-edged. If you try to use ChatGPT-4o like a traditional search engine, where you put in a single query and then scroll through a bunch of answers, you will sometimes be disappointed. But if you are willing to learn the habit of using a few follow-up prompts, you’ll get much better results. And it’s not hard to learn some good prompting habits. 

    Here’s an example from my research: 

     I found the follow-up prompt “What were your sources for the last answer?” incredibly helpful. I also found that including the sentence “Please check your information is current” within the prompt would usually encourage ChatGPT-4o to check relevant websites. 

    ChatGPT-4o is fast. Logically this shouldn’t really matter. If a tool saves me an hour of manual research, whether I wait 5 seconds or 50 seconds for the output isn’t a big deal. But previous generative search tools did feel a little sluggish sometimes. ChatGPT-4o has trimmed its response times to the point where it is a lot easier to remain fully engaged with the results as they appear. There are still occasional pregnant pauses, but overall the experience feels much more conversational and the end result is more pleasant at a human level. This is especially valuable when follow-up prompts are needed – which they often are. I think most B2B researchers will feel this is a big deal. 

    ChatGPT-4o still hallucinates. I’ve written before about the “hallucination” issue with generative search. My testing suggests that hallucinations are rarer and more subtle in ChatGPT-4o than in previous generative search tools. But they still exist, and in niche searches they have potentially dangerous consequences. For instance when I used my test question “Is Sharp Ahead a good B2B digital marketing agency?”, ChatGPT-4o referenced a very plausible, but totally imaginary, positive online review. The saving grace here is that ChatGPT-4o will reference its sources, so it’s easier to fact check its answers. 

    There are no ads, and no evidence that ChatGPT-4o sees Bing Ads. Even when ChatGPT-4o has pulled in search results from Bing that would normally include paid ads, the results that appear in ChatGPT-4o show no trace of the paid content. 

    There are still a few obvious bugs. For instance, I saw a few examples of formatting problems when I asked ChatGPT-4o to put its results in a table.

    And one of the suggestions for “best alternatives to WP Engine” was, erm, WP Engine. 

    But these bugs were minor and didn’t significantly undermine my overall confidence in the tool or the usefulness of the results. 

    The learning curve for ChatGPT-4o is shallow and enjoyable. I learned a lot about ChatGPT-4o in just a few hours of methodical testing. I wasn’t frustrated or annoyed at any point. Sometimes the tool didn’t do what I wanted, but I was able to quickly pick up techniques that improved the results. The rapid response time helped here, I’m sure. 

    I’d like to share a few more of my tips and tricks about how to get the best from ChatGPT-4o as a B2B research tool, but that will have to wait for another blog. (Or get in touch if you want to chat!) 

    What might this mean for B2B marketers? 

    I’ve already written about some of the possible implications of generative search for the future of B2B marketing here. I think everything I said then still holds true, and the emergence of ChatGPT as a credible B2B search engine introduces another possibility – if ChatGPT is successful, high-intent B2B search traffic might disappear from conventional search engines altogether. 

    ChatGPT is currently offered as a paid tool. Its use is funded by subscriptions, not by advertising. If that stays the same and if ChatGPT is able to win market share from Google, Bing and the other established search engines, paid search might become a much less important part of the B2B marketing mix in future. And SEO will need to change to target ChatGPT’s special use cases. 

    But will ChatGPT actually displace the existing search engines? In part that will depend on how good a job Google and Bing do with their own generative search experiences. And those are improving all the time. 

    So it’s time for me to take a fresh look at those and see how the latest evolutions of generative search from Bing and Google compare with the new ChatGPT kid on the block. It’s going to be particularly interesting to compare ChatGPT-4o with the latest developments from Bing – is it better to add generative AI to a search engine, or to integrate a search engine within a generative AI tool? Stay tuned for an update in a future blog post! 

    If you are interested in how Generative Search might impact your B2B marketing strategies, or if you need help with any other aspect of your B2B digital marketing, please get in touch. We offer a free, no-obligation 30-minute consultation.


      How long does a B2B website last?

      Subheading

      We’re sometimes asked questions like “our website is only 2 years old, is it reasonable to replace it?”. Or equivalently, “if we invest £Xk in a new website this year, how long before we need to spend similar money again?”.  


      The fundamental question behind both of those is: how long should one expect a new B2B website to last? 

      This isn’t an exact science. But in our opinion, a sensible target lifetime for a B2B website is between 3 and 5 years. 

      It’s OK to be risk-averse about commissioning a new B2B website

      A new B2B website is a significant investment in both time and money for any business. 

      The total cost is likely to consume a significant chunk of your annual marketing budget. And let’s be honest – B2B website projects are HARD WORK. The time and energy needed to bring a new website to life is a big drain on any B2B marketing team and puts pressure on relationships with wider stakeholders. The opportunity cost of all of that is very substantial. 

      [And sometimes website projects fail altogether, so some or all of that investment might never deliver any return at all. But that’s a topic for another day!] 

      Given the cost and uncertainty, most B2B organisations will, quite rationally, be cautious about committing to a new website. There’s often a choice between investing in a new site or carrying on with the current website.  

      A long-lived website is good for the budget 

      Think of a B2B website as a capital investment – like a new computer or other equipment. The longer it lasts, the more value it will provide in return for the up-front cost of building it. A website that costs £120k to build and that lasts for 5 years costs £2k a month over its lifetime. If instead it lasts for a year, that cost rises to £10k a month. 

      If you can commission a new website that will last for, say, 6 years rather than 2 years, you’re effectively reducing its cost by two thirds. That’s a massive saving. 
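
      Here’s that arithmetic as a quick worked example (a sketch, using the same illustrative £120k build cost as above):

      ```python
      # Worked example: amortising a £120k build cost over different lifetimes.
      build_cost = 120_000  # £

      for years in (1, 2, 5, 6):
          monthly = build_cost / (years * 12)
          print(f"{years} year(s): £{monthly:,.0f} per month")

      # Output:
      # 1 year(s): £10,000 per month
      # 2 year(s): £5,000 per month
      # 5 year(s): £2,000 per month
      # 6 year(s): £1,667 per month
      ```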

      Why do old websites need to be retired?  

      There are a few different reasons why an old B2B website might be “too old” to carry on. It’s worth taking a bit of time to dig into these. Not least because, in trying to understand what “kills” a B2B website, we can identify some things that could help build a more future-proof website with a longer lifetime – and the associated financial benefit.  

      There are a few reasons why a website might need to be replaced. Some of the most common: 

      • Technical obsolescence – the CMS or other critical aspect of the site’s technology platform is out of date, support is no longer available, or there is no viable migration route to a supported platform. The site is at risk of irrecoverable technical failure and possibly exposed to security vulnerabilities. There’s no feasible option except to replace the site.
      • Rebranding – your company undergoes a rebrand. The design and content of the old site won’t be consistent with the new branding. If the rebrand isn’t too radical it might be possible to adjust the existing site. But if it’s a major rebrand, it probably won’t be economically feasible to rework the existing site. It will be cheaper to replace it. 
      • Business change – something significant changes in your business, perhaps as a result of an acquisition or merger, or as a response to other strategic drivers. The story that you want to present to the outside world changes substantially as a result. The old site doesn’t reflect that, and it’s not cost effective to rework it. A new site that reflects the new positioning of the business is needed. 
      • Content structure/workflow changes – the site still works and looks fine for users, but the “back end” systems that the team uses to update the site no longer allow it to be maintained in a safe and cost-effective way. If you’re lucky, the site’s technical platform might be flexible enough that it can be adjusted for new content structures and workflows. But if the changes are substantial, it’s probably time for a new site that’s built around your new content structure and workflows.  
      • Design obsolescence – the site’s been around a long time. It still works, and could still be updated in incremental ways, but the look and feel and user experience are dated, and it is increasingly difficult to present new content in ways that users expect. It’s often a nuanced decision to replace a website for this reason: there’s no single moment at which a design becomes “too dated”, so it will always be a judgement call whether to replace the website or carry on for a while longer. 

      What impacts the website lifetime?

      Let’s look at how some of those factors might impact the website’s lifetime. 

      Technical obsolescence 

      In the early days of the internet there was a very fast pace of technical change, and web technology platforms became dated very quickly. Nowadays the pace of change is a bit more predictable. But there is still a lot of change. Take WordPress as an example. WordPress as a platform goes back to 2003, but it wasn’t really a sensible choice for a professional website until around 2013. Today it’s a great choice for quite a wide range of website projects and can robustly handle even some very complex requirements (like multi-language support). But we can see some trends that might lead to WordPress being a less desirable platform choice in the future, with platforms like Webflow and Wix offering some advantages. 

      So WordPress has already lasted 10 years or so as a mainstream platform, and we can be pretty confident it will be around quite a while longer. 

      In contrast there are a lot of technical platforms that don’t have so much longevity. For instance, if you’d chosen the Nucleus CMS which in 2012 was touted as a serious alternative to WordPress, you’d have seen it become obsolete back in 2014. 

      We’d say that if you build a website today on a mainstream, widely supported technology platform you can count on at least a 10-year lifetime. If you choose a more niche platform for some critical component of your website, you run the risk of much earlier obsolescence. So make this technical-platform lifetime assessment part of your decision-making about a new website. You might have valid business reasons for selecting a new or niche technology platform, but if you do, recognise that that choice risks limiting the working lifetime of your new site. 

      Rebranding and business change 

      A rebrand is rarely spontaneous, so your company’s leadership team probably has a good idea of the likely timescales for any future rebranding. If you can be confident there’s no major rebranding in the offing, this probably won’t limit the lifespan of your website. But if your brand is ageing or if there are other reasons why a rebrand is likely, you should assume you’ll need to replace your website when that rebrand happens. 

      You might hope to build a website in such a way that it can be rebranded as an update, without completely replacing the site. Sometimes that’s possible, especially if the rebrand is an incremental change. But often a change in branding will undermine the overall design coherence of a website. It’s best to assume that new brand = new website. 

      So: if you are expecting a rebrand within a year or two, it’s probably best to defer any commitment to a new website. 

      Business change is harder to predict. Some organizations and industries are naturally fast-moving and subject to a lot of change – for instance there may be a lot of mergers and acquisitions. Other industries don’t have the same pace of change. You could look back over the history of your own organisation and similar companies in your industry to make an estimate of how likely it is that a future business change will come along and require a website replacement. For a lot of industries this might set an average 5-10 year lifespan. 

      Content structure/workflow changes 

      These sorts of changes are often the main internal driving factor for a website replacement. The old site still works, but the marketing team can no longer update it in the way that they want to, or updates are time-consuming and difficult in a way that limits what the marketing team can achieve. 

      There can be a lot of reasons why a mismatch develops between desired working practices and the structure and functionality of an existing website. Perhaps your team has adopted a new marketing technology like a marketing automation system. Perhaps there’s been a business change or competitive pressure that requires a new type of content, such as product videos where previously static images were sufficient. Or you’ve moved into new geographic markets and need content in multiple languages. 

      No one sets out to build a website that won’t support future content requirements and associated workflows, but it’s hard to anticipate these types of changes too far into the future. You can mitigate the impact of future changes by choosing a mainstream technical platform and building in some flexibility to the backend setup of any new website. But it’s not possible to build in unlimited flexibility to anticipate every possible future requirement, and flexibility comes at a cost – if you insist on a lot of flexibility in the back end of a new website, you’ll pay more for the build and you’ll have more complexity to handle day-to-day, which increases your cost of ownership. 

      You’ll perhaps have an idea about the likely pace of future change within your team. Are you early adopters that like to bring in new marketing technologies whenever possible? Are your competitors and your industry very dynamic and constantly bringing in new types of content? Or are things more slow moving? 

      Realistically, for most organisations it’s likely that some form of content or workflow change will start to limit the lifetime of a website within perhaps 2-5 years.  

      Design obsolescence 

      Design trends evolve gradually over time – so a website from 2015 just “looks” a bit dated, even if it still works fine. And there are specific challenges that arise from external technology changes. For example, in 2015 there was arguably no need for a B2B website to support mobile browsing, whereas today that’s essential. And changes in legislation have forced technical changes around cookie consent.

      It’s hard to predict whether there will be future technology step changes that will make today’s designs obsolete. Perhaps the way that we interact with websites will change fundamentally because of generative AI. Or some new device like Apple’s Vision Pro will become mainstream and rework the browsing experience. There’s not much we can do today to anticipate uncertain future step changes in design best practices.  

      The creeping sense of a site becoming “dated” is a bit easier to predict though. It’s a judgement call, but I’d say it’s realistic to expect around a 3-5 year lifetime for a B2B design. After that time, the chances are your site will start to signal its age to your users. 

      You can increase the likely lifetime of your new site by taking this into account in the design process. Most B2B teams will be somewhat conservative in their attitude to design, but you can still aim for a fresh, modern design to avoid limiting the lifetime of your site. There are some evergreen design principles – simplicity always ages better than complexity, for example! 

      Here’s to a longish and happy life (for your website) 

      So you’ll see why we suggest a 3-5 year target lifetime for a new B2B website – the clock starts running on design obsolescence even before your shiny new site goes live, and content/workflow changes are an inevitable fact of life in the digital marketing world. They will start to bite after a few years, and after 5 years the chances are high that either design obsolescence or content/workflow changes will become so limiting that a new site is essential.  

      This has a few implications when you are planning a website: 

      • You should use that 3-5 year target lifetime in your financial evaluation of the investment case. 
      • It’s worth thinking and planning for contingencies like rebranding and business change that might happen within the first 1-3 years of a new website’s life. 
      • It’s not really worth worrying about other contingencies beyond 3-5 years.  

      In particular the 3-5 year timescale is a good one for assessing your technical platform choices. Can you be confident that the core technologies underpinning your new website will still be around and fully supported in 5 years’ time? If so, that’s as good as forever for planning purposes. 

      If you need a new B2B website, or if you want help extending the lifetime of your existing site, or just a conversation around the issues we’ve raised in this blog, we’d love to hear from you – get in touch! 


        How best to use Google Search Console for B2B marketing

        GSC Best Practice and Governance

        Last time, we introduced Google Search Console (GSC) and explained its importance for B2B digital marketers. In particular, we explained how GSC is not just for SEO specialists, but also offers a valuable safety net for detecting and managing technical and content issues.


        In this follow-up article we look at some of the best practices and governance that you should put in place around GSC, to ensure your B2B marketing team is set up to take advantage of the important features it offers.

        TL;DR – the next sections on user access management and governance are IMPORTANT, especially in a larger or fast-changing organisation. But it’s not the most gripping subject matter. We won’t be offended if you want to skip ahead. But don’t say we didn’t warn you if you have a PR crisis that needs some urgent content deletion and can’t find anyone with the right GSC permissions! 

        GSC user access management 

        If you’ve followed the process in the last article, you’ll have GSC set up and available to your organisation. But you still need to make some choices about which people will have access and how that access will be managed and controlled. 

        For starters, everyone who logs in to GSC will need to do so via a Google Account. We’ll assume you’re familiar with Google Accounts for the purposes of this article. (A reminder that you can register an existing, non-Gmail, company email address as a Google Account if you need to.) 

        If you work as part of a team, you’ll need to decide whether each person in the team will use their own individual Google Account (like sam.brown@yourcompany.com) to access GSC, or if you will share a Google Account (like marketing@yourcompany.com).  

        Your organisation may already have policies about this. Typically, we recommend individual user accounts, but note that you will then have to apply some proactive governance to the list of users. 

        For a simple tool, GSC has quite a complex user access model: 

        • Verified owner: an account that has directly established its ownership of the website (via one of Google’s verification methods). You must have at least one verified owner. Verified owners have full access and full control over all of the functionality of a property in GSC – including some things that could be quite damaging in the wrong hands (like the right to delete content from Google’s index)! A verified owner has the ability to invite other users to become delegated owners, full users and restricted users. 
        • Delegated owner: an account with the same powers as a verified owner, but which is not directly verified with Google. Delegated and verified owners can also allow other users to become Full and Restricted users of a GSC property. 
        • Full user: has access to most functions, but not some of the most powerful and dangerous ones. 
        • Restricted user: can read most of the data in GSC but can’t access most other functions. This level of access might be appropriate for, for instance, an SEO agency partner. 

        One becomes a verified owner by verifying a property with Google, and one becomes one of the other user types by being invited by an owner. 

        Note that Google will allow you to become a verified owner of an EXISTING property by following the verification process again – even if there is already another verified owner. 

        So, if disaster strikes and you lose access to your GSC account, you can use the verification process to get back in. 

        GSC user governance 

        The best approach to GSC user governance will depend on your organisation’s exact structure and setup. But here are some general considerations. 

        • Remember that GSC is a powerful tool that not only contains commercially sensitive data but also provides access to Google functions that are capable of doing significant damage to your organisation’s online presence if misused. Bear this in mind when deciding who should have access at which level, and how to manage that list of users over time. 
        • Your organisation’s IT, legal, or HR teams may already have policies about user access to tools like GSC and Google Accounts. If so, it’s best to stick to them! 
        • It is pretty much essential to have at least one verified owner at all times. Otherwise you won’t have full access to everything in GSC. (There are some things that only a verified owner can do.) You might choose to have two or more verified owners, as a contingency. If you accidentally lose your last verified owner, you can repeat the Google verification process to reinstate a verified owner or set up a new one – but this might be time-consuming and you might need access to systems like your DNS that could be controlled by someone outside your immediate team. So best not to rely on that. 
        • If your organisation’s public profile is important, remember that GSC access might be needed urgently in a technical or content emergency – like a PR crisis or if your website is hacked. You should make sure that at least one person with owner access (verified or delegated) is available pretty much any working day, and you might want someone with this access on call out of hours. You’ll need to allow for holiday and sickness cover when you choose your list of users. 
        • You will need to proactively add and remove access for team changes (new starters, leavers etc). 
        • You might need to grant access to GSC to people outside your organisation. For example, an SEO agency might need access. Make sure that access is removed if the agency relationship ends! 

        GSC email alerts 

        By default, GSC will send out email alerts for a range of issues to each GSC user. You can opt out of some or all of these emails (User settings -> email preferences). Care is needed here: these opt-outs apply across ALL of your GSC properties. So, if you manage more than one website in GSC and turn off alerts for one of your properties, you’ll also turn them off for all the others. 

        GSC email alerts are useful. Google is pretty sparing with them – most notifications are about something important, and some are about VERY important things (notably, if Google thinks your site has been hacked). We’d recommend allowing all email alerts. It’s worth setting up a rule in your email system to put them into a folder. 

        If you have several GSC users, each user will receive every email alert. That’s great for making sure someone gets the message but creates ambiguity about who should take action. So that’s something to address in your governance processes: 

        • Who should receive GSC alerts 
        • Who should take action, for each type of alert 

        It’s also important to check that your GSC email alerts are actually reaching their target inboxes. You don’t want a critical security alert to be missed because of an over-zealous spam filter! 

        Checking GSC reports: how often is…often? 

        We’d like to recommend that you look at some of your GSC reports daily. But let’s be honest, we’re all busy people. There may not be time in your daily routine for extra monitoring tasks, and if your business is relatively slow-moving (like many B2B businesses), daily checks of GSC aren’t absolutely critical. And there are other things in GSC that definitely don’t need frequent checks.  

        We suggest the following guidelines: 

        Check “often”: daily if you can, or weekly, or at least a couple of times a month. 

        Check “regularly”: once a month or so, at least once every three months. 

        Check “occasionally”: ideally every three months or so, and at least once a year. 

        Things to check “often” in GSC 

        Some examples of things to check every day, or every few days, if you can! 

        • Any critical alerts e.g. for malware – these can be covered by a suitable process for monitoring of GSC’s email alerts, but if you are not confident of your email monitoring setup you can always see any alerts within the GSC interface. 
        • Look at page and search term reports to see if there have been any sudden, unexpected changes – e.g. a sudden surge in searches for your brand name. This might indicate an underlying PR or reputational issue.  
        • Review month-on-month (MoM) keyword data by linking GSC to your regular reports (e.g. customised Looker Studio reports). Grouping keywords into themes and being able to see ‘at a glance’ trends can help you identify new opportunities and address concerns where rankings take a sudden dip. 
        • If you are creating or maintaining content for SEO purposes, check that any newly created content is correctly appearing in the Google index. If you have new content that is time-sensitive, consider using GSC to manually submit the new URLs for indexing, to make sure that Google picks them up as quickly as possible.  

        Things to check “regularly” in GSC 

        If SEO is important to you, then “regularly”, i.e. monthly or thereabouts, is a good cadence for a detailed retrospective SEO analysis, looking at search terms and volumes and their trends over the past few weeks and months. 

        But even if SEO isn’t a focus for your team, there are a number of things in GSC that should be checked from time to time. These things are still important, but they shouldn’t change very often – so checking every few weeks or so should be enough for most B2B businesses. Some examples: 

        • Look at the breakdown of your search appearance by country. Are there any new geographic patterns? Does the pattern of geographic interest still align with your business’s commercial priorities? 
        • Look at page and search terms reports and see if there are any anomalies. For example, you might see an “unimportant” page suddenly pop up as getting a large number of search impressions.  
        • Check that your verified owners are still verified! Verification isn’t a once-and-for-all process, rather Google will check from time to time that the verification conditions are still met. Verification status might be lost because of an accidental site content change or over-zealous tidying of DNS settings. 
        • Refresh your user governance. Are there enough users with each level? Have you removed any users who have left the company or who otherwise don’t belong? 
        • Check that coverage of the Google index matches your intended content map and that no important content is missing. 
        • Also check that there’s nothing in the Google index that SHOULDN’T be there! For example, if you use a subdomain for specialist landing pages, you probably want to exclude those pages from Google search results. But Google’s crawler is pretty inquisitive, and it might have found those pages, unless you have the right settings in place! GSC reports will show you if the wrong subdomains have crept into the Google index. (Note you’ll need to verify a DOMAIN property for this to work.) 
        • Sometimes there are notifications from GSC about technical issues that need action, but which can be tolerated for a while. For example, if GSC reports that some of your pages have invalid structured data (like the Event metadata we discussed last time), you might decide that you can live with that for a while. But you don’t want to ignore them forever. Perhaps a fix can be included in your next scheduled website update sprint, for example. This monthly-ish checkpoint is a good time to review any of those longer-standing technical issues and make sure you have an appropriate action plan in place. 

        Things to check “occasionally” in GSC 

        Examples of tasks that can be done just once in a while. 

        • Download and archive any GSC data that you might want to use for long-term tracking or trend analysis. GSC only keeps 16 months’ worth of data! So if you want to refer back to this year’s search performance data in, say, two or three years’ time, you will need to archive it. GSC has flexible features for exporting and downloading its data. You’ll need to put in place some governance about how the exports are stored and archived for your team’s future use. (There’s a sketch of one way to automate this after this list.) 
        • Spend a bit of time on a long term analysis, perhaps comparing this year’s search performance with last year’s (if you remembered to archive the previous report!) and looking at long term trends and changes. 
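
        As an illustration of what that archiving could look like in practice, here’s a hedged sketch using the Search Console API’s Search Analytics endpoint. It assumes a Google Cloud service account (with its JSON key file) that has been granted access to your GSC property – the site URL and filenames are placeholders:

        ```python
        # Sketch: export GSC search analytics to CSV for long-term archiving.
        # Assumes: pip install google-api-python-client google-auth
        import csv

        from google.oauth2 import service_account
        from googleapiclient.discovery import build

        SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
        SITE = "https://www.example.com/"  # your verified GSC property

        creds = service_account.Credentials.from_service_account_file(
            "service-account.json", scopes=SCOPES
        )
        gsc = build("searchconsole", "v1", credentials=creds)

        response = gsc.searchanalytics().query(
            siteUrl=SITE,
            body={
                "startDate": "2024-01-01",
                "endDate": "2024-12-31",
                "dimensions": ["date", "query", "page"],
                "rowLimit": 25000,  # the API maximum per request
            },
        ).execute()

        with open("gsc-archive-2024.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["date", "query", "page", "clicks", "impressions", "ctr", "position"])
            for row in response.get("rows", []):
                writer.writerow(row["keys"] + [row["clicks"], row["impressions"], row["ctr"], row["position"]])
        ```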

        Extra GSC checks for special projects 

        If you are making a big change to your web presence – for instance, launching a new website, switching to a new CMS or carrying out a mass reorganisation of content – you should proactively use GSC around that change to ensure that Google’s view of your digital presence isn’t compromised.  

        Check BEFORE you make the change – to establish a baseline. For example, keep a record of how many pages are present in the Google index. 

        Check again 1-2 days AFTER you make the change – to make sure that nothing has gone immediately wrong. You should be able to see some of your new content making its way into the Google index. 

        AFTER the change KEEP checking every 1-2 days until you are fully confident that Google’s view of your new setup matches your intentions. For example, are all of the pages indexed at their new URLs (and no longer present at their old ones)? Some changes may take a few weeks to completely run their course through Google’s indexing system, so keep checking until you are sure all is well.  
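
        If you have a handful of business-critical URLs, the URL Inspection API offers one way to script these before-and-after checks. Again a hedged sketch, reusing the `gsc` service and `SITE` from the export example above – the URLs are placeholders:

        ```python
        # Sketch: spot-check whether key URLs are indexed, before and after
        # a big site change. Reuses the `gsc` service and SITE defined in
        # the archiving sketch earlier in this article.
        KEY_URLS = [
            "https://www.example.com/",
            "https://www.example.com/services/",
            "https://www.example.com/contact/",
        ]

        for url in KEY_URLS:
            result = gsc.urlInspection().index().inspect(
                body={"inspectionUrl": url, "siteUrl": SITE}
            ).execute()
            verdict = result["inspectionResult"]["indexStatusResult"]["verdict"]
            print(f"{url}: {verdict}")  # PASS means indexed without problems
        ```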

        A closing thought on GSC-related roles and responsibilities 

        We hope we’ve shown you that GSC is an important tool and given you an idea of the sorts of processes you should have in place. With appropriate processes, all the excellent information that Google provides in GSC will help you improve your digital presence and mitigate a lot of technical and content-related risks. 

        The roles and responsibilities around GSC sit oddly in many organisations. Indeed, the old name for GSC was “Google Webmaster Tools”, referencing the peculiar technical/marketing hybrid role of “webmaster” from the early days of the commercial internet. Should GSC be “owned” by the marketing team? By IT? By PR or legal? 

        We think it’s still an issue today that functionality that is potentially crucial to an organisation – like early detection of a hacked website, or speedy removal of content from Google’s index during a PR crisis – sits in a rather obscure technical tool that doesn’t have an obvious organisational owner.   

        It’s great for marketers to build connections across organisational boundaries. So a final thought on GSC – is there anyone else in your organisation, outside the marketing team, who needs to know about it and to be included in the associated governance processes? There might be a chance for you to make a new connection with your colleagues in PR, IT or legal. 

        If you need help with Google Search Console or any other aspect of your B2B digital marketing, please get in touch. We offer a free, no-obligation 30-minute consultation.

        If you’d like to stay up to date with the latest marketing changes and best practices, sign up to our B2B digital marketing newsletter.


          The impending death of third-party cookies – what does it mean for B2B marketers?

          The Cookieless Future


          An update 

          As we confidently predicted in this blog, third-party cookies will finally, and for definite, be phased out in 2024. This was, however, dependent on there being no issues with the competition authorities.  

          And as it turns out, there are issues with the competition authorities! So third-party cookies will survive a little while longer. Google has released an update with more context here, or you can read on to find out what this means for B2B marketers.

          Google’s stated timeline has changed from “before the end of 2024” to “not before the end of 2024”. Which the technically minded amongst us will note could be an extension as short as one minute… but most likely, a bit longer.  

          Nothing really changes here: third-party cookies are on their way out, and our advice on what B2B marketers need to do to prepare still stands. We’ll keep you posted as and when we hear more. 

          Introduction

          Third-party cookies have been endangered for some time. But now it’s official – the third-party cookie will go extinct in 2024. It will finally take its place in the museum of obsolete internet technologies, alongside the dial-up modem, Netscape Navigator and MySpace.

          Why does this matter for B2B digital marketers?

          In a nutshell: because most types of remarketing, and some other intent-based targeting techniques, just stop working without third-party cookies. So some B2B digital campaigns that have performed well in the past will have to be reworked or abandoned.

          The ad platform operators are being a little coy – for perhaps understandable reasons – about exactly what the impacts will be and exactly what replacements and alternatives will be offered. So we B2B advertisers face some uncertainty in the coming months. Let me guide you as best I can through what’s happening, when it will happen, and what you should plan to do about it.

          Disclaimers: as always, we offer this information in good faith but nothing in this article should be construed as legal advice. And there are a lot of uncertainties here and things might change (for instance if regulators intervene). I’ll do my best to be definitive, but you should read everything that follows with an implicit “My best guess at the moment is…”.

          What is changing with third-party cookies, and when?

          Google has announced that the Chrome browser will block third-party cookies “in the second half of 2024”. So we don’t yet have a specific date, but I guess the latest possible date consistent with that announcement is 31st December 2024.  

          In the meantime, Chrome is already blocking third-party cookies for a random 1% of Chrome users. This is to allow everyone to understand the impact of third-party cookie blocking and provide time for a transition. 

          Chrome is the dominant web browser worldwide with around 65% of the market. Safari (18%) and Firefox (3%) already block third-party cookies. I can’t find a definitive statement from Microsoft about the Edge browser (5% market share) but, since it shares the underlying Chromium technology with Chrome, it’s likely Edge will do the same as Chrome. So Google’s pending change to Chrome will mean we bid adieu to third-party cookies from all the mainstream web browsers, representing more than 90% of the world’s web browsers, by December 2024 (and possibly sooner). They may linger in a few obscure or out-of-date minority browser platforms, but they’ll be dead for all practical purposes come 2025. 

          Will the sky fall? Are there any massive technical impacts?

          Nah. 

          There are a few scenarios where in the past a third-party cookie might have been necessary for functioning of some website component. But let’s face it: Safari has been blocking third-party cookies by default since 2020. Other web browsers (including Chrome) have offered the option to block them for many years. If there were a technical problem with your website when third-party cookies were blocked, you’d have heard about it by now. There might be some subtle behaviour changes from some components of your tech stack, but nothing major is going to stop working with your site’s core functionality. 

          Of course that all assumes that you are a business using your website as a marketing channel for your own products and services. (What the online ad industry calls an “advertiser”.) It’s a different story if you are a “publisher” (providing ad-funded content) or, worse, an online ad network (providing the tech for advertisers to buy advertising and publishers to sell it). Then, yep sorry but for you, the sky is falling. See below. 

          And another situation that will impact a minority of B2B marketers: if your organisation runs multiple websites on multiple domains that need to share user data via cookies for some functional purpose – for example, so that a single user profile can be used across multiple sites – then you will need to take steps. There are some special technologies (e.g. “Related Website Sets” or RWS) that are intended for these situations, and a bunch of associated complexity and uncertainty. Details here.

          So just what ARE the impacts?

          For most of us as B2B marketers, the main impact is that some types of remarketing, and some types of interest-based targeting, won’t work anymore. Depending on how the different ad platforms choose to handle the transition, the affected campaigns will either stop working altogether, or serve their ads in a more limited set of locations, or – arguably the worst scenario – quietly change their behaviour so that they target a wider audience of mostly the wrong people. 

          Some types of targeting can survive. For example, contextual targeting – where an ad is shown based on the content of the page containing the ad – can still work within the scope of a single web page. And targeting that takes place entirely within a single logged-in platform (like a social platform or behind a premium publisher’s paywall) won’t have to change. But run-of-internet remarketing/retargeting – the ads that follow you around the internet and remind you of previous website visits – and interest-based targeting can’t work without third-party cookies. And nor can some types of display advertising that are targeted using a third-party profile (offered by some social platforms, for example). 

          In case it seems arbitrary why some types of targeting are going away and others are staying, it might help to understand WHY these specific techniques are reliant on third-party cookies. So let’s have a technical interlude. 

          What are third-party cookies (and why are they necessary for remarketing)?

          I’m gonna do my best here to be punchy but NGL – this is technical and obscure. I won’t be offended if you skip to the next section!  

          Let’s start with FIRST-party cookies. They’re bits of data that are stored by a web browser in response to a request from the site that the browser is viewing. That same website can read and write the cookie data. Other websites can’t. (Hence “first-party” – the cookie belongs to the same website that the browser visited.) Typically, a first-party cookie is used to retain some state that is important to the website across pages or across visits – for example, the contents of a shopping basket.  

So if I visit AcmeAlternativeArchitects.com, I might receive a first-party cookie scoped to that domain.
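To make that concrete, here’s a minimal sketch of a server handing out such a cookie. The domain, cookie name and value are all hypothetical, and a real site would normally do this via its web framework:

```typescript
// Minimal sketch (hypothetical names): a response from
// AcmeAlternativeArchitects.com that sets a first-party cookie.
import { createServer } from "node:http";

createServer((req, res) => {
  // The browser stores this cookie against the AAA.com domain;
  // only pages on AAA.com can send it back or read it later.
  res.setHeader("Set-Cookie", "aaa_basket=widget-123; Path=/; SameSite=Lax");
  res.end("Hello from AcmeAlternativeArchitects.com");
}).listen(8080);
```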

And if I separately visit BetterB2bBrowsing.com then I receive a separate first-party cookie in exactly the same way – this one scoped to BBB.com, and invisible to AAA.com.

Other websites can’t access the data in a first-party cookie; the browser doesn’t allow it. So first-party cookies can’t be used to share data across sites.
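You can see that isolation for yourself by opening the browser console on two different sites and comparing what each can read – a sketch, with hypothetical cookie values:

```typescript
// Run in the console on AcmeAlternativeArchitects.com:
console.log(document.cookie); // "aaa_basket=widget-123"

// Run in the console on BetterB2bBrowsing.com – AAA.com's cookie
// is simply not visible from here:
console.log(document.cookie); // "bbb_session=xyz"
```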

          Now you might already have a clue how third-party cookies are different. When a web browser visits a site, the site may ask for data to be stored in, or read from, a THIRD-party cookie. That cookie data is sent to a different site that may have no relationship at all with the site that is being visited.  

So here I am on AcmeAlternativeArchitects.com, and that site uses a third-party cookie on BetterB2bBrowsing.com. Data can be sent to, and read from, that cookie without my browser ever visibly leaving the AAA.com page.
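Here’s a sketch of the hidden request behind that – again, every domain and value is made up. While the browser renders the AAA.com page, a tracking pixel or script fires a background request to BBB.com, and BBB.com’s response sets a cookie scoped to its own domain:

```typescript
// Minimal sketch (hypothetical names): BBB.com's tracking endpoint,
// hit by e.g. GET /pixel?visited=acmealternativearchitects.com.
// Because the request comes from a page on a DIFFERENT site, the
// browser treats the cookie as third-party; it must therefore be
// marked SameSite=None and Secure.
import { createServer } from "node:http";

createServer((req, res) => {
  res.setHeader("Set-Cookie", "bbb_uid=user-42; Path=/; SameSite=None; Secure");
  res.setHeader("Content-Type", "image/gif");
  res.end(); // a 1x1 transparent tracking pixel would be returned here
}).listen(8081);
```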

          Note that I’ve not deliberately visited BBB.com at all here. As the user of the browser I probably don’t even know that website exists. Access to BBB.com and the exchange of data in the associated cookie is hidden away in the background of my browsing session.  

Subsequently, I visit CheapClassicConcrete.com and I’m a bit surprised to see an ad for AcmeAlternativeArchitects! That remarketing strategy is possible because the two websites I visited, AAA.com and CCC.com, are sharing data about my browsing history behind the scenes via the third-party cookie on BBB.com.
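In sketch form, the ad slot on CCC.com is filled by a script served from BBB.com, and because that script’s request carries BBB.com’s cookie, the ad network can recognise me as the visitor it saw earlier on AAA.com (all names hypothetical):

```typescript
// Sketch: ad-loading code served by BetterB2bBrowsing.com, running
// inside its iframe on CheapClassicConcrete.com.
const response = await fetch("https://betterb2bbrowsing.com/ad-request", {
  credentials: "include", // attaches BBB.com's bbb_uid third-party cookie
});
const ad = await response.json();
// Because bbb_uid links this page view to my earlier visit to AAA.com,
// the network can return a remarketing ad for AcmeAlternativeArchitects.
console.log(ad.creative);
```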

          Remember – in this scenario I’ve never explicitly visited BBB.com and I probably don’t even realise that it exists. 

In today’s online advertising environment there are a LOT of different ad networks, all interested in profiling and tracking users. So many websites set and read a LOT of different third-party cookies – which is what leads to those cookie-consent notices listing dozens or even hundreds of “advertising partners”.

          Because third-party cookies can share data about my browsing history across multiple sites, they can be used to drive some powerful marketing strategies, including: 

          1. Remarketing, where I see an ad for a website that I previously visited (perhaps even for a specific product from that site); and 
          2. Intent-based targeting, where my browsing history is analysed in more general terms (for instance, to infer that I might be interested in booking business travel to Australia) so that advertisers can show me products or services related to that trip. 

          You can perhaps see why third-party cookies give rise to privacy worries. 

Google’s upcoming change to Chrome will stop this data sharing by changing the way the browser works. In future, requests to read a third-party cookie will simply be blocked.
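For the technically curious: an embedded third-party script can already check whether it has cookie access using the Storage Access API. This is just a sketch of that check, not something most B2B sites will ever need to write:

```typescript
// Sketch: run inside a cross-site iframe (e.g. BBB.com's frame embedded
// on another site). hasStorageAccess() resolves to false when the browser
// is denying the frame access to its own (i.e. third-party) cookies.
const hasAccess = await document.hasStorageAccess();
if (!hasAccess) {
  console.log("Third-party cookie access is blocked in this context");
}
```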

          Although existing third-party cookies will still hang around on users’ browsers, they won’t be accessible and there won’t be any way to use them to target advertising (or, indeed, for any other purpose). 

          What will happen to our ad campaigns when third-party cookies are blocked?

          Remarketing and other campaigns that rely on third-party data on display ad networks will not be able to work in their current form once third-party cookies go away. 

There might be replacement technologies of a sort. At the time of writing there are ideas in circulation for something a bit like display remarketing, for example, but without the use of cookies. These ideas are still evolving, and the ad platform companies haven’t yet been explicit about how they will handle the transition to any new technical approaches. They have some understandable reasons for that:

          • Some ad technology companies (like some specialist third-party ad networks) may fear they will be very badly affected by this change, to the point where their business model may no longer be viable. They may be hoping for a reprieve (e.g. if the competition regulators intervene) and certainly don’t want to scare off their customers prematurely. 
          • Other ad platforms (notably the big platform/audience owners like Google, LinkedIn and Meta) won’t be so badly affected, and may even benefit in some cases (because e.g. remarketing inventory will shift inside the “walled gardens”), but they are keeping their options open about exactly how they are going to handle the transition until the regulatory aspects are clearer. 
          • Some technology companies are working on alternative ways of targeting ads that will work without third-party cookies. Google in particular, with some partners, is actively working on alternatives to third-party cookies via an initiative called “Privacy Sandbox”. But no one is being very forthcoming with details here. Perhaps that coyness reflects a desire to avoid premature scrutiny by privacy campaigners and regulators for what are likely to be somewhat controversial approaches. 

          Will Privacy Sandbox maintain the status quo?

Erm, probably not. At least not for B2B marketers.

You can read about Privacy Sandbox here. At the moment, details are scarce: the official FAQ (as of the date of this blog article) offers a lot of ambition and aspiration, but precious little concrete detail.

In early iterations of Privacy Sandbox, Google has trialled a number of technologies, including “FLoC” (Federated Learning of Cohorts). These use aggregate data – combining the browsing patterns of many individuals – to avoid individual privacy concerns. Another approach, called “Topics”, involves a form of contextual targeting, where the content of the visited web pages is used to infer a user’s interests in a generalised, privacy-friendly way. But there doesn’t seem to be any certainty yet about the exact techniques that will finally emerge from the Privacy Sandbox to “replace” third-party cookies.
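To give a flavour: Chrome has been trialling a browsingTopics() call (behind flags and origin trials at the time of writing, so both availability and shape may well change) that hands an ad script a few coarse interest categories rather than an individual’s browsing history. A hedged sketch:

```typescript
// Sketch of Chrome's experimental Topics API. It isn't yet part of
// TypeScript's standard DOM typings, hence the cast; the returned
// fields may change as the trial evolves.
const topics = await (document as any).browsingTopics();
// Example shape: [{ topic: 186, taxonomyVersion: "1", ... }], where the
// topic ID maps to a broad category like "Travel" or "Fitness" – far
// coarser than the per-individual data that third-party cookies exposed.
console.log(topics);
```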

Without more detail I can only speculate at this point. But I can’t see how Privacy Sandbox can achieve its stated objective of preserving individual privacy while still allowing the sort of fine-grained tracking that is needed for remarketing to niche B2B audiences.

I can believe that some privacy-preserving technique working with aggregate or modelled data could do a reasonable job of predicting mainstream consumer behaviours – for example, whether a person is in the market for a winter sun holiday or a new saloon car. But the niche, specialist nature of most B2B purchases is going to be too difficult. There are millions of different B2B product and service categories, and in some cases only a few hundred individuals in the market for any given category at any given time. Modelling and predicting all of those behaviours, without any third-party data, in a way that is useful for ad targeting strikes me as an intractable data science problem.

So while I’d love to be proven wrong, I think it is likely that whatever emerges from Privacy Sandbox will be of limited value to B2B marketers. Don’t expect anything like a like-for-like replacement for third-party cookies. Remarketing to anonymous visitors is going to die along with the third-party cookie.

          What steps should I take now?

          First off, audit your current use of remarketing and intent-based targeting campaigns and figure out which ones are going to break when third-party cookies die. Get the best from them while you can. And be ready to stop those campaigns at short notice. 

          Secondly, think about alternative strategies that you might be able to use, for example: 

• It will still be possible to target audiences based on first-party data (subject to appropriate consent). For instance, your list of email subscribers can be used to build an audience for display advertising on many social platforms. Can you make more use of this? Can you grow your email subscriber base, for instance by offering a newsletter?
• Keep an eye on Privacy Sandbox and similar initiatives. Perhaps credible techniques that are useful for B2B will emerge. (But as I explained above, I’m not optimistic on this.)
• Search remarketing (“RLSA”, or remarketing lists for search ads) will probably still work in a more limited form. Can you make better use of this?
• Remarketing strategies that can be implemented entirely within a social platform will still work. For instance, if someone views your videos on LinkedIn, you can remarket to that person ON THE LINKEDIN PLATFORM.
• Contextual targeting techniques that target ads within the scope of a single web page, based on the content of that page, will still work. And we might see some innovation here from the ad networks. Be on the lookout for new contextual ad strategies for B2B use cases.
• Some less mainstream B2B targeting techniques use IP addresses instead of cookies (typically to target users on the internal network of specific companies). These will continue to work. If the targeting options align with your campaign objectives, you might be able to use these approaches.

          You might notice a theme here – internet-wide display networks won’t be able to offer remarketing, but the search engines and social platforms will still be able to offer remarketing within their own “walled gardens”. So an unfortunate consequence of the demise of the third-party cookie is a shift of power in favour of the large platform owners like Google, Meta and LinkedIn.   

          First-party data strategies – just good old-fashioned B2B marketing

          It’s arguable that third-party cookies have made B2B digital marketers a bit lazy. In a world where our ads can follow anonymous prospects around the internet, we didn’t need to try so hard to build relationships. Those days are ending. 

          However you feel about the demise of strategies that rely on third-party cookies, it’s no bad thing to brush up your first-party data strategies. Having an explicit, opted-in marketing relationship with your desired audience – based on data that you own and control – is valuable in so many ways.  

          Think about these aspects of your marketing mix: 

          • Do you have an email newsletter with compelling content? Do you promote it strongly at the top-of-funnel touchpoints? And via appropriate social channels? 
          • Do you have compelling “lead magnet” content that is good enough to be gated, so that it can capture contact details with marketing consent? 
          • Do you make good use of offline strategies to build your subscriber base? For example, if you use outbound telemarketing, can those calls be used to gather email subscribers? 

Even quite small changes – for example, making the newsletter signup a little more prominent on your homepage – can over time make a big difference to the growth of your first-party data. And in the cookieless future, the quantity and quality of our first-party data will increasingly determine our success or failure as B2B digital marketers. So it’s a great time to bring fresh energy and focus to your data strategies.

          Need help?

          If you’re worried about the impacts of this change on your B2B digital marketing strategies, please feel free to get in touch. And watch this space – we’ll keep you updated via this blog and our newsletter as we learn more about some of the areas that are currently uncertain.  

          Some FAQs

          Q: Does this mean I don’t need to worry about cookie consent anymore? 

A: Absolutely not. GDPR and similar laws worldwide still apply to cookie use, and you need to take the necessary steps to be compliant. Your site will still be setting and reading first-party cookies, and they still need consent. And even if Chrome and most other mainstream browsers are blocking third-party cookies, you still need consent for any legacy or niche browsers that do work with them. However, you COULD take advantage of the fact that third-party cookies are going away to simplify your consent handling somewhat – there’s no need for those cookie warnings about “we and our 258 advertising partners value your privacy” if you’re no longer sharing cookies with advertising networks. So consider removing those advertising tags, and stripping back the related privacy messages, once third-party cookies go away. For information on cookie compliance see our blog article.

          Q: Will Google Analytics and other tracking technologies stop working? 

          A: No. Google Analytics and most other sensible and ethical tracking technologies have used first-party cookies for a long time. They won’t be impacted by the disappearance of third-party cookies.  
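You can even check this for yourself: on a site running Google Analytics, its cookies show up as ordinary first-party cookies in the browser console (the value below is only an illustrative format):

```typescript
// Run in the console on a site that uses Google Analytics:
console.log(document.cookie);
// e.g. "_ga=GA1.1.123456789.1700000000; ..."
// These are set on the site's own domain, so Chrome's change to
// third-party cookies leaves them untouched.
```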

          Q: Will programmatic and ABM strategies be impacted? 

A: Yes. These terms cover a number of different technical approaches, so individual campaigns will be impacted to different degrees when third-party cookies are retired. You’ll need to look at the details of each programmatic and ABM campaign to assess the likely impact – but in many cases it will be significant.

          Q: What about advertising and targeting within apps, not websites? 

          A: It’s different. And it’s complicated. Too complicated to get into here. In general, apps are less regulated and less dependent on browser functionality. So they are likely to be less impacted by Google’s pending change. 

          Q: I’m not sure whether any of my existing campaigns are going to be impacted by this change. Can I safely just leave them running? 

A: We can’t be sure about that yet. In the past, some ad platforms have made “stealth” changes to targeting options that caused running campaigns to behave in very wasteful ways – for example this change. So I don’t think it’s safe to assume that your existing campaigns can be left to run. We’ll be keeping a close eye on how the ad platforms handle this change. Watch this space.

Sign up for our B2B digital marketing insights to stay up to date with the latest remarketing changes and best practices. If you want to find out more about how to optimise and test your B2B digital marketing campaigns, or have any questions about B2B digital best practice, get in touch!
