
Visual Prompting: LLMs vs. Image Generation

We’ve been trying a lot of different things in Project Cyborg, our quest to create the DevOps bot. The technology around AI is complicated and evolving quickly. Once you move away from chatbots and start making more complicated things, like working with embeddings and agents, you have to hold a lot of information in your mind. It would be nice to visualize this info.

Visual prompting is what we were looking for, and it’s more complicated than I expected.

Visual Prompting for Image Generation

My AI work has been almost exclusively with LLMs and text generation. I haven’t had much need for image generation. The tech is really interesting, but not very useful for creating a DevOps bot. However, I did hear about Chainner, a visual composer for image generation. Its interface will be familiar if you’ve worked with a node-based shader editor before.

Example of Chainner for images

This is a really cool way of working with image generation models. Instead of working in Python to create images, you can compose them visually. Some things just make more sense mapped out visually. This could help us to mentally simplify some of the complex tasks we’re dealing with. It made me wonder: could I modify Chainner to work with LLMs?

Chainner for LLMs

Chainner doesn’t have anything built-in for LLMs. I’m not really surprised. However, it is well designed, and so it wasn’t very difficult to see how I would implement it myself.

I started with a simple LLM node. Here’s a sample from the code:

class PromptNode(NodeBase):
    def __init__(self):
        self.description = "This is a node for making LLM prompts through an agent"
        self.inputs = [
            EnumInput(LLMOptions, "LLM Platform",
                      option_labels={k: k.value for k in LLMOptions}),
            BoolInput("Use Google"),
            BoolInput("Use Vectorstore"),
            DirectoryInput("Vectorstore Directory", has_handle=True).make_optional(),
            EnumInput(EmbeddingsOptions, "Embeddings Platform",
                      option_labels={k: k.value for k in EmbeddingsOptions}),
        ]
        self.outputs = [
            TextOutput("Response"),
        ]

        self.category = LLMCategory
        self.name = "LLM Agent"
        self.icon = "MdCalculate"
        self.sub = "Language Models"
LLM node example

With that working, I moved on to creating a node for a Vectorstore (aka embeddings).

class VectorstoreNode(NodeBase):
    def __init__(self):
        self.description = "This is a node for loading a vectorstore"
        self.inputs = [
            EnumInput(EmbeddingsOptions, "Embeddings Platform",
                      option_labels={k: k.value for k in EmbeddingsOptions}),
            DirectoryInput("Vectorstore Directory", has_handle=True),
        ]
        self.outputs = [
            VectorstoreOutput(),  # custom output type carrying the loaded store
        ]

        self.category = LLMCategory
        self.name = "Load Vectorstore"
        self.icon = "MdCalculate"
        self.sub = "Language Models"
A vectorstore node

You get the idea of the workflow. At the end of my experimenting, I ended up with a sample graph that looked like this:

An agent example

Roadblocks for Chainner

It’s about here that I had to abandon the experiment.

It was looking cool, and I liked the concept. There was only one problem: it wasn’t going to work, not as Chainner was designed. I don’t want to get too deep into the weeds, but there’s a dependency issue. We’re using self-hosted embeddings on some of our vectorstores in Project Cyborg, which means we’re running open-source AI models for some of the embeddings. To do this, we’re spinning up spot instances on Lambda Labs. One of the Python libraries you need to run self-hosted embeddings only works on Unix-based systems (shoutout to a file-path faux pas). That’s not a problem if you’re working in VSCode or a command line. It is a problem when you need to run an app with a GUI on Windows.

There are also some other problems with the visual scripting in general that stopped me from pursuing it further.

Other Visual Prompting Solutions

The day after I decided to stop pursuing the Chainner option, Langflow was released.

Look familiar?

Langflow is a visual interface for Langchain. So, basically exactly what I was doing. It is very new, and under development, but it does some things very well. If you’re looking to create a simple app using an Agent, and you don’t know python, Langflow gives you an option. It doesn’t currently support exporting to code, so it has limited usage in production. You could treat it as interactive outlining.

It does highlight the biggest problem currently with visual prompt engineering: you still need a strong understanding of the systems at play. To even use Langflow, you have to understand what a zero-shot agent is, how it interacts with an LLM chain, and how you would create tools and supply them to the agent. You don’t really gain a lot in terms of complexity reduction. You also lose a lot in terms of customization. Unless you customize your nodes to expose every single parameter that the underlying API supplies, you have to create tons of separate, very similar nodes. For LLMs, visual graphs are only really useful for small tasks.

Ultimately, the existing solutions serve a purpose, but they don’t really reduce the cognitive load of working with LLMs. You still need to know all of the same things, you just might be able to look at it in a different light. With everything changing so fast, it makes more sense for us to stick with good, old-fashioned programming. Hopefully, visual prompting will catch up, and be useful for more than image processing and chatbots.


How to take the brain out of the box: AI Agents

An AI Agent at work answers questions ChatGPT can’t

Working with LLMs is complicated. For simple setups, like general purpose chatbots (ChatGPT), or classification, you have few moving pieces. But when it’s time to get serious work done, you have to coax your model into doing a lot more. We’re working on Project Cyborg, a DevOps bot that can identify security flaws, identify cost-savings opportunities in your cloud deployments and help you to follow best practices. What we need is an AI Agent.

Why do we need an agent?

Let’s start at the base of modern AI: the Large Language Model (LLM).

LLMs work on prediction. Give an LLM a prompt, and it will try to predict the right continuation (a completion). Everything we do with AI and text generation is powered by LLMs. GPT-3, GPT-3.5 and GPT-4 are all LLMs. The problem is that they are limited to their initial training data. These models cannot access the outside world. They are a brain in a box.
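Prediction is easiest to see at toy scale. Here is a tiny bigram model (my own illustration, not how GPT works internally) that picks the most likely next word given the previous one. An LLM does the same kind of thing over tokens, at vastly larger scale, and it can only ever echo patterns from its training data:

```python
from collections import Counter, defaultdict

# "Training data": the only world this model will ever know
training_text = "the cat sat on the mat because the cat was tired"

# Count which word follows which
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Predict the follower seen most often in training
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in the training text
```

Ask it about a word it never saw during training and it has nothing to say: that’s the brain in a box.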

You have a few different options depending on your use case. You can use fine-tuning, where the model undergoes another training stage. Fine-tuning is excellent and has a lot of use cases (like classification), but it still doesn’t let you use live data. You can also use embeddings. These effectively extend the context length (memory) of your AI so that it can process more data at once. Embeddings help a lot, but they don’t help the LLM take action in the outside world.

The other option is to use an AI agent.

What is an Agent?

Here’s the simplest definition:

An AI agent is powered by an LLM, and it uses tools (like Google Search, a calculator, or a vectorstore) to interact with the outside world.

That way, you can take advantage of the communication skills of an LLM, and also work on real-world problems. Without an agent, LLMs are limited to things like chatbots, classification and generative text. With agents, you can have a bot that can pull live information and make changes in the world. You’re giving your brain in a box a body.

How can we do this? Well, I’m going to be using Langchain, which comes with multiple agent implementations. These are based on ReAct, a system outlined in a paper by researchers at Princeton and Google. The details are complicated, but the implementation is fairly simple: you tell your AI model to respond in a certain style. You ask it to think things through step by step, and then take actions using tools. LLMs can’t use tools by default, so they’ll try to make up what the tools would do. That’s when you step in and do the thing the AI was trying to fake. For example, if you give the model access to Google, it will just pretend to make a Google search. You set up the tools so that you make an actual Google search and then feed the results back into the LLM.
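Here’s a minimal sketch of that loop (the prompt format and names are illustrative, not Langchain’s actual internals): the model emits an “Action”, we intercept it, really run the tool, and feed the “Observation” back until the model produces a final answer. A stubbed-out model stands in for the LLM so the sketch runs on its own:

```python
import re

# Stand-in for the LLM: first it decides to search; once it has seen an
# Observation, it answers. A real agent would call an LLM model here.
def fake_llm(transcript):
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: Search[capital of France]"
    return "Thought: I now know the answer.\nFinal Answer: Paris"

# The tools we "give" the model -- real functions we run on its behalf
TOOLS = {"Search": lambda query: "Paris is the capital of France."}

def run_agent(question, llm, tools, max_steps=5):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        output = llm(transcript)
        if "Final Answer:" in output:
            return output.split("Final Answer:")[-1].strip()
        # The model only *describes* an action; we actually perform it
        match = re.search(r"Action: (\w+)\[(.*)\]", output)
        if match:
            tool_name, tool_input = match.groups()
            observation = tools[tool_name](tool_input)
            transcript += f"\n{output}\nObservation: {observation}"
    return None

print(run_agent("What is the capital of France?", fake_llm, TOOLS))  # Paris
```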

The results can seem magical.

Example: AI Agent with Google Search

Let’s start with a simple agent that has access to two tools.

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
# We'll use an OpenAI model (Davinci by default) as the "brain" of our agent
llm = OpenAI(temperature=0)

# We'll provide two tools to the agent to solve problems: Google, and a tool for handling math
tools = load_tools(["google-search", "llm-math"], llm=llm)

# This agent is based on the ReAct paper
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

while True:
    prompt = input("What would you like the agent to tell you (press CTRL+C to quit)? ")
    # Hand the question to the agent, which reasons step by step
    # and calls the tools as needed
    agent.run(prompt)

These agent examples look best in video form.

Example: AI Agent with Access to External Documents (Vectorstore)

Here’s another example that uses a tool to pull information about Azure. I converted the official Azure documentation into a Vectorstore (aka embeddings). This is being used by Project Cyborg so that our DevOps bot can understand best practices and the capabilities of Azure.

from langchain.agents import Tool

tools = [
    Tool(
        name="Azure QA System",
        # azure_qa is the QA chain built on top of the Azure vectorstore
        func=azure_qa.run,
        description="useful for when you need to answer questions about Azure. Input should be a fully formed question.",
    ),
]

Here it is in action:

AI Agents make LLMs useful

Chatbots are cool. They are very useful for many things. They can’t do everything, though. Most of the time, your AI will need access to live info, and you’d like for it to be able to do things for you. Not just be a very smart brain that can talk. Agents can do that for you. We’re figuring out how we can use them here at Electric Pipelines. If you want help figuring out how agents could help your business, let us know! We’d be happy to talk.


What does AI Embedding have to do with DevOps?

AI embeddings are powerful. We’re working on Project Cyborg, a project to create a DevOps bot.

There are a lot of steps to get there. Our bot should be able to analyze real-world systems and find out where we could implement best practices. It should be able to look at security systems and cloud deployments to help us better serve our customers.

To that end, our bot needs to know what best practices are. All of the documentation for Azure and AWS is available for free, and it’s searchable. However, online documentation doesn’t help with problem solving. It only helps if you have someone capable running a search. We want to be able to search based on our problems and real-world deployments. The solution: embeddings.

AI Embeddings

Here’s the technical definition: Text embeddings measure the relatedness of text strings.

Let’s talk application: embeddings allow us to compare the meaning of sentences. Instead of needing to know the exact words to search for, you can search by meaning.

Embeddings work by converting text into a list of numbers. Then, those numbers can be compared to one another later, and similarities can be found that a human couldn’t detect. Converting text to embeddings is not terribly difficult. OpenAI offers an embedding model that runs off of Ada, their cheapest model. Ada has a problem, though.
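To make “compared to one another” concrete: the standard comparison is cosine similarity. The three vectors below are made up for illustration (a real Ada embedding is a list of 1,536 numbers), but the math is the same:

```python
import math

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way (same meaning);
    # values near 0 mean the texts are unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings for three sentences (illustrative values)
cat = [0.80, 0.10, 0.20]      # "The cat sat on the mat"
kitten = [0.75, 0.15, 0.20]   # "A kitten rested on the rug"
invoice = [0.10, 0.90, 0.30]  # "Please pay this invoice by Friday"

# The two sentences about cats score far closer than the unrelated one,
# even though they share no keywords
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice))  # True
```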

Ada has a memory problem

Ada is a powerful model, and if it can keep track of what it’s supposed to be doing, it does excellent work. However, it has a low context length, which is just a fancy way of saying it has Alzheimer’s. You can’t give Ada a long document and have it remember all of it; it can only hold a few sentences in its memory at a time. More advanced models, like Davinci, have much better memory. We need a way to get Ada to remember more.


We’ve been using Langchain for a few different parts of Project Cyborg, and it has a great tool in place for embedding as well. It has tools to split documents up into shorter chunks so that Ada can process them one at a time. It can then store these chunks together in a Document Store, which acts as long-term memory for Ada. You can embed large documents and collections of documents together, and then access them later.
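The chunking step itself is simple in principle. Here’s a bare-bones sketch of splitting with overlap (Langchain’s real splitters are smarter: they try to break on separators like paragraphs and sentences rather than mid-word):

```python
def split_into_chunks(text, chunk_size=200, overlap=20):
    # Each chunk shares its first `overlap` characters with the tail of
    # the previous chunk, so no sentence is lost at a boundary
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

document = "word " * 200  # stand-in for a long documentation page
chunks = split_into_chunks(document.strip(), chunk_size=100, overlap=10)
print(len(chunks), "chunks, each small enough for Ada to handle one at a time")
```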

Breaking documents up into smaller pieces also lets you search your store for just the chunks relevant to a question. Let’s go over some examples.

You can see the document (data-factory.txt) and the different chunks (5076, 234, 5536) it’s pulling from for the answer
In this case, it pulls from multiple different documents to formulate an answer

Here you can see that we ask a question. An AI model ingests our question and then checks its long-term memory, our document store, for the answer. If it knows the answer, it replies and references where the answer came from.

Fine-Tuning vs. Embedding

Embeddings are different from fine-tuning in a few ways. Most relevant, embeddings are cheaper and easier to run, both for in-house models and for OpenAI models. Once you’ve saved your documents into a store, you can access them using relatively few tokens and with off-the-shelf models. The downside comes in the initial embedding. To convert a lot of documents to an embedded format, like we needed to, takes millions of tokens. Even at low rates, that can add up.

Fine Tuned usage is significantly more expensive across the board

On the flip side, fine-tuning will typically use far fewer tokens than embedding, so even though the cost per token is much higher, it can be cheaper to fine-tune a model than to build out an embedded document store. However, running a fine-tuned model is expensive: if you use OpenAI, the cost per token is 4x the price of an off-the-shelf model. So, pick your poison. Some applications can absorb embedding’s higher up-front cost in exchange for cheaper processing later.


Take AI Seriously: It is Foundational

AI (Artificial Intelligence) is a rapidly advancing technology that has the potential to revolutionize a wide range of industries, from healthcare to finance to manufacturing. While some people may view AI as a toy or a gimmick, it is actually a foundational technology that is already transforming the world in significant ways.

AI is foundational because it enables new capabilities and innovations that were previously impossible. For example, AI-powered systems can analyze vast amounts of data in real time, identify patterns and anomalies, and make predictions that are more accurate than those made by humans. These capabilities have already been applied to a wide range of applications, from speech recognition and natural language processing to autonomous vehicles and medical diagnosis.

AI and the Internet

AI has revolutionized the way we interact with the Internet by making it more personalized and intuitive. With the help of AI, websites and apps can analyze our behavior and preferences and provide us with customized experiences. For example, Amazon’s product recommendations and Netflix’s content suggestions are both powered by AI algorithms that analyze our browsing history and viewing habits. In addition to personalization, AI has also had a significant impact on search engines, which are an essential part of the internet. Search engines use AI to provide users with more accurate and relevant search results. AI algorithms analyze user behavior, such as search history and click-through rates, to improve search results and ensure that users find what they are looking for quickly and easily.

A person on their laptop searching the Internet.
Microsoft is trying to take over search.

Microsoft Plans Domination

Microsoft’s move to challenge Google’s dominance in search is significant and could have a profound impact on the AI industry. For years, Google has been the undisputed leader in the search engine market, with its search algorithm being one of the most advanced and sophisticated in the world. However, Microsoft’s AI-powered search engine, Bing, is rapidly gaining ground and is now the second-most popular search engine in the world.

Microsoft has incorporated GPT (Generative Pre-trained Transformer) technology into Bing search through a feature called “Advanced Answers.”

Advanced Answers is a feature in Bing search that helps it understand what you’re looking for when you type in a question. It does this by using a language program that has read a lot of text, so it can give you a better, more natural answer. This means Bing can give you an answer even if it’s not written down exactly on a webpage or in your search.

Electric Pipelines’ Bot Project

We take AI seriously at Electric Pipelines. We recognized the potential of it immediately. Interested in a relevant use case of AI, we began developing a DevOps bot to automate our customers’ operations. With AI becoming increasingly integrated into the digital landscape, it is more important than ever to have a solid foundation in place. This includes robust security measures to protect against cyber threats and the ability to scale to meet the demands of an ever-increasing user base. A strong digital foundation is essential for businesses to stay competitive in the rapidly evolving digital landscape.

What the Future Holds

The future of AI is exciting, and its impact on the internet is only going to increase in the coming years. AI will continue to transform the way we interact with the internet and make it more intuitive and personalized. In the search engine industry, the use of AI will become even more advanced, enabling search engines to provide more accurate and relevant results. As businesses continue to adopt AI technology, it will become even more integrated into our daily lives. As the use of AI technology continues to grow, it is essential for businesses to stay up-to-date with the latest advances and ensure that they are taking advantage of the benefits of AI while avoiding the pitfalls.

Final Thoughts

In conclusion, AI is a crucial component of the internet and has revolutionized the way we interact with digital technology. Its impact on the search engine industry has been particularly significant, with AI algorithms providing more accurate and relevant results to users. However, it is important to approach AI technology with caution and respect, considering its ethical implications. As the use of AI continues to grow, it is essential to have a solid foundation in place to build secure, reliable, and scalable systems. In the end, it is crucial for businesses to stay up-to-date with the latest advances in AI technology, while also taking a responsible and ethical approach to its use. By doing so, we can harness the power of AI to make the internet a better and more personalized place for everyone.


Using Classification to Create an AI Bot to Scrape the News


We’re hard at work on Project Cyborg, our DevOps bot designed to enhance our team to provide 10x the DevOps services per person. Building a bot like this takes a lot of pieces working in concert. To that end, we need a step in our chain to classify requests: does a query need to go to our Containerization model or our Security model? The solution: classification. A model that can figure out what kind of prompt it has been given. Then, we can pass along the prompt to the correct model. To test out the options on OpenAI for classification, I trained a model to determine if news articles would be relevant to our business or not.

Google News

I started by pulling down the articles from Google News.

from GoogleNews import GoogleNews

start_date = '01-01-2023'
end_date = '02-02-2023'
search_term = "Topic:Technology"

# Search Google News for the term, restricted to the date range
googlenews = GoogleNews(start=start_date, end=end_date)
googlenews.search(search_term)
This way, I can pull down a list of Google News articles with a certain search term within a date range. Google News does not do a good job of staying on topic by itself.

The second result Google News returns here already moves away from what we searched for

So, once I have this list of articles with full-text and summaries, I loaded them into a dataframe using Pandas and output that to an Excel sheet.

import pandas as pd

# Grab the page info for each article
for i in range(2, 20):
    googlenews.getpage(i)

# Create a Pandas Dataframe to store the articles, then export to Excel
df = pd.DataFrame(googlenews.results())
df.to_excel("articles.xlsx")
An example of some of the Google News data we pulled in

Fine-Tuning for Classification

Then comes the human effort. I need to teach the bot what articles I consider relevant to our business. So, I took the Excel sheet and added another column, Relevancy.

The updated spreadsheet had a column for relevancy

I then manually ran down a lot of articles, looked at titles, summaries and sometimes the full text, and marked them as relevant or irrelevant.

Then, I took the information I had for each article (title, summary, and full text) and combined it into one column. That forms the prompt for the fine-tuning. The completion is taken from the relevancy column. I put these two columns into a CSV file, which will be the training set for our fine-tuned model.
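For illustration, here’s roughly what that combining step looks like (the column names and separator are examples, not our exact ones; OpenAI’s guidance is to end every prompt with a fixed separator and start every completion with a space):

```python
def to_training_rows(articles):
    rows = []
    for article in articles:
        # Combine title, summary, and full text into a single prompt,
        # ending with a fixed separator so the model knows where the
        # completion should begin
        prompt = (
            f"Title: {article['title']}\n"
            f"Summary: {article['summary']}\n"
            f"{article['full_text']}\n\n###\n\n"
        )
        # The completion comes from the hand-labeled relevancy column
        rows.append({"prompt": prompt, "completion": " " + article["relevancy"]})
    return rows

rows = to_training_rows([
    {"title": "New Azure AI service launches", "summary": "...",
     "full_text": "...", "relevancy": "relevant"},
])
print(rows[0]["completion"])  # " relevant"
```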

Once I had the dataset, it was time to train the model. I ran the csv through OpenAI’s data preparation tool.

OpenAI’s fine-tuning data preparation tool makes sure your dataset is properly formatted for fine-tuning

That gave me our training dataset and our validation dataset. With those in hand, it was time to train a model. I selected Ada, the least-advanced GPT-3 model available. It’s not close to ChatGPT, but it is good for simple things like classification. A few cents and half an hour later, I had a fine-tuned model.
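For reference, both steps run through OpenAI’s command-line tool (file names here are examples; the prep tool suggests the train/validation split itself when it detects a classification task):

```shell
# Check formatting and convert the CSV into JSONL training files
openai tools fine_tunes.prepare_data -f articles.csv

# Fine-tune Ada on the prepared files, tracking classification metrics
openai api fine_tunes.create \
  -t articles_prepared_train.jsonl \
  -v articles_prepared_valid.jsonl \
  -m ada \
  --compute_classification_metrics \
  --classification_positive_class " relevant"
```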


I can now integrate the fine-tuned model into my Google News scraping app. Now, it can pull down articles from a search term, and automatically determine if they are relevant or not. The relevant ones go into a spreadsheet to be viewed later. The app dynamically builds prompts that match the training data, and so I end up with a spreadsheet with only relevant articles.

A table with Google News articles only relevant to our company


AI Can Help Small Businesses Compete

Photo by Scott Graham on Unsplash

The internet has become an integral part of our daily lives, providing us with access to information, entertainment, and communication on a scale that was once unimaginable. However, as we rely more on the internet, Big Tech has dominated the online landscape. It squeezes out competition and leaves consumers with fewer choices and less control over their online experience. In this blog post, we will explore how this concentration of power among a few companies is affecting the internet and its users, and discuss how AI has the potential to level the playing field. Small businesses can finally compete with Big Tech.

Google is afraid to compete with small businesses.

Google’s Evolution

Google was founded in 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University. They built the company on the principles outlined in a 1998 paper they wrote. In their paper, they proposed a new way of organizing the world’s information using a system they called “PageRank.” The paper stated that “advertising funded search engines will be inherently biased towards the advertisers and away from the needs of consumers.” (emphasis mine) In the early days, Google’s search algorithm was designed to rank websites based on the number and quality of links pointing to them. This delivered more relevant and useful search results.

However, over time, as Google has grown and evolved, they have become the model they warned about in their paper. Advertisers are king in Google’s current mode of search. The top of the page belongs to the highest bidder, regardless of its value to the customer.

Can AI Level the Playing Field?

Artificial intelligence (AI) has the potential to level the playing field for small businesses and startups, letting them compete with larger companies on a more equal footing. By harnessing the power of AI, small businesses can gain access to the same capabilities as larger companies. For example, they can use AI to analyze customer data and provide insights that can inform business decisions, automate repetitive tasks, and improve customer service. AI-powered chatbots and virtual assistants can help small businesses provide 24/7 customer support, so they can keep pace with larger companies with more resources at their disposal.

AI can help automate and optimize various business processes such as inventory management, marketing and sales, and financial management. It can enable small businesses to operate more efficiently and effectively, which in turn can lead to increased competitiveness and growth. In short, by embracing AI, small businesses can gain a competitive edge and achieve greater success in today’s fast-paced, technology-driven world.

A battle scene, representing small businesses competing with Big Tech.
Big Tech and small business battle to control AI

Let the Battle Begin

It is reported that Google and other large tech companies are worried about the competition that AI will bring. Big Tech tends to buy what it can’t compete against and then lock everything down. However, with AI, it is becoming increasingly difficult to control and monopolize the technology.

The battle to control AI is heating up, and it’s uncertain who will come out on top. Many new startups and smaller companies are emerging as key players in the field, thanks to the democratization of AI tools and resources. Many copywriting AIs, like Jasper, are already disrupting the status quo and bringing new innovation to the market. The race is ongoing, and it will be interesting to see how the big players adapt to the competition. Some companies may choose to collaborate and invest in startups, while others may try to acquire them. But it’s clear that the future of AI is uncertain and the competition is fierce.

Our AI-Powered Company is Leading the Way

Electric Pipelines is an AI-powered DevOps consultancy dedicated to helping small businesses compete with Big Tech. We understand the challenges that small businesses face when trying to keep up with the fast-paced, ever-changing world of technology. That’s why we’re developing Project Cyborg, a DevOps bot that we will use to help our customers streamline their development and operations processes, and automate repetitive tasks.

With this technology, we will help small businesses gain access to the same capabilities as larger companies, but at a fraction of the cost. With Electric Pipelines, small businesses can focus on what they do best – creating great products and services – while we take care of the rest. Contact us to get started.

Your Small Business Can Compete

We have explored the problem of Big Tech dominating the internet. These companies are squeezing out competition and not providing good services for customers. We have discussed how the concentration of power among a few companies is affecting the internet and its users.

However, we have also highlighted the potential of Artificial Intelligence (AI) as a solution to this problem. Small businesses can use AI to compete with larger companies. AI provides small companies the same capabilities of larger ones at a fraction of the cost. By harnessing the power of AI, small businesses can gain a competitive edge and achieve greater success in today’s fast-paced, technology-driven world. Using AI-powered chatbots, virtual assistants, data analysis, and automating repetitive tasks can help small businesses operate more efficiently and effectively. This in turn can lead to increased competitiveness and growth. In conclusion, AI can be the solution to the problem of the internet being dominated by a few large companies and can help to level the playing field for small businesses and startups.


Why People Will Be Disappointed by GPT4

Though GPT-3 has been on the market since 2020, it changed the world last November. When most people discovered it, they were blown away by all the challenging tasks it could handle. From business tasks, like automating customer service, generating high-quality content, or building a chatbot, to creative endeavors like writing, drawing, and programming, GPT-3 has changed everything. If GPT-3 has changed the world, what will GPT-4 do? There is such a sense of anticipation around this new iteration of OpenAI’s language model that it can’t help but disappoint.

Don’t Believe the Hype

Media and industry experts are overhyping GPT-4’s capabilities. “The GPT-4 rumor mill is a ridiculous thing,” OpenAI CEO Sam Altman said. “I don’t know where it all comes from.” One particularly viral tweet claims that GPT-4 will have 100 trillion “parameters,” compared to GPT-3’s 175 billion parameters, something that Altman called “complete bull” in an interview. With each new release of GPT, the model’s capabilities have improved, but the jump from GPT-3 to GPT-4 may not be as significant as some are expecting. This could lead to disappointment among users who were expecting a major leap forward in the model’s capabilities.

What it can do

GPT-4 may not be suitable for all tasks. GPT-4, like its predecessors, is a general-purpose language model. This means that it can perform a wide range of tasks, but it may not excel at any one specific task. For example, GPT-4 may not be as effective at natural language processing tasks as specialized models that have been specifically trained for that task. This could disappoint users who were expecting GPT-4 to outperform specialized models in specific tasks.

Since it is a language model, it is a good writing tool. Businesses will be able to use it to create lots of content fast. It can also help with customer support by answering queries and offering personalized support, and in marketing by generating targeted content and ads. The big hope for the next generation of AI is that it will be more human-like: more intuitive, able to pick up on inferences from people, and able to give more human-like responses to customers.

It May be Expensive

The cost of using GPT-4 will depend on a number of factors, including how much computational power and memory you need, as well as the specific use case. It’s fair to expect that GPT-4 will be more expensive than its predecessor. As GPT-3 was available via a cloud-based API, users were charged based on the amount of usage, which made it accessible to a wide range of users and businesses. If GPT-4 is not offered through a similar cloud-based API, it may be more difficult and expensive for users to access and use the model.

Additionally, GPT-4 is expected to have increased computational power and memory requirements, which will likely drive up the cost. As with any large AI model, the cost of fine-tuning it to a specific task, data storage and computational power will also be a factor.


While GPT-4 is an exciting development in the field of AI, it’s important to manage expectations and be aware that the model may not live up to the hype. Additionally, GPT-4 may not be suitable for all tasks, disappointing users expecting it to outperform specialized models. Not to mention the expense in training and using it. It’s important to remember that GPT-4 will be a powerful tool, but not a panacea for all natural language processing tasks.

Electric Pipelines can Help

Though GPT-4 won’t be a magic bullet for your business, it will be a useful tool. Let Electric Pipelines wield it for you. Our DevOps services are currently powered by GPT-3, and we look forward to stepping up our game with the addition of GPT-4. Don’t miss out on the opportunity to streamline your operations and improve customer satisfaction. Contact us today to learn more about how we can help you harness the power of GPT-4 and take your business to the next level.


Call of Duty should stop innovating

The series’ biggest successes don’t come from innovative ideas, but old ones done well.

Call of Duty lost its way.

Call of Duty is one of the oldest franchises in gaming. After Call of Duty 4: Modern Warfare, Activision began releasing new Call of Duty (COD) games every year. That makes 15 games in 15 years. The latest COD game is Call of Duty: Modern Warfare 2. This is the second time they’ve released a game called Call of Duty: Modern Warfare 2. It gets confusing.

Four studios release Call of Duty games: Infinity Ward, Treyarch, Raven Software and Sledgehammer. Infinity Ward started the series and made the first four games, but the original creators, Jason West and Vince Zampella, left in 2010 after contract disputes. Infinity Ward still lives on and makes games in the series, but without its original creative direction.

I’ll be going through the rise of Call of Duty, the decline, and the resurgence.

All Call of Duty games released since Call of Duty 4, in order of release.

The Rise of Call of Duty

Call of Duty 4: Modern Warfare, 2007 -> Call of Duty: Black Ops 2, 2012

The franchise used to be on top of the world. The first three games sold alright and established the franchise. However, starting with Call of Duty 4: Modern Warfare, they released the biggest hits of their era. COD 4 shattered expectations and changed the way the world viewed shooters. It was one of the bestselling games of all time. They released World at War the next year. It deviated from the modern theme, and came close to, but did not exceed Call of Duty 4. Modern Warfare 2 was a return to form. It outsold Call of Duty 4 by almost ten million copies. The year after that, they released Black Ops. It was the biggest game of all time when it launched. It looked like Call of Duty would start breaking records every single year.

They didn’t quite make it. After Black Ops, they released Modern Warfare 3. It didn’t match up to Black Ops, but it was still a huge success. The capstone of Call of Duty’s era of dominance was Black Ops 2. It hit $1 billion in sales faster than any other entertainment property ever had to that point. It was also the last time a Call of Duty game would reach those heights for a decade.

The Decline of Call of Duty

Call of Duty: Ghosts, 2013 -> Black Ops 4, 2018

What happened? Call of Duty lost its way. 
After the smash hit of Black Ops 2, they released Call of Duty: Ghosts. For the first time since World at War, people saw the new release as a step back for the franchise. Black Ops 2 had refined the game to a level Ghosts couldn’t match. The solution? Change things up.

They made Advanced Warfare, a game which completely changed the way Call of Duty played. Gone were the days of real-world weapons and tight, grounded gunplay. Instead, they introduced flying movement options, robot suits with chain guns, and a futuristic aesthetic. To be fair, Black Ops 2 also had a futuristic theme. However, Advanced Warfare pushed far beyond what Black Ops had been willing to do. Audiences didn’t like it. It sold worse than Ghosts and Modern Warfare 2. It heralded the true decline of Call of Duty.

Infinite Warfare (2016) marked a whole new low for the franchise.

They managed to recover slightly with Black Ops 3. BO3 was another deviation from the standard COD formula. They introduced characters with ultimate abilities, not too dissimilar from games like Overwatch. It didn’t sell well, but it wasn’t the disaster that Infinite Warfare was.

The next game, Infinite Warfare, doubled down on a lot of Advanced Warfare’s features. Advanced Warfare had failed to outsell Call of Duty: Ghosts, but Infinite Warfare didn’t even outsell Call of Duty: World at War. It was the worst-selling COD title since Call of Duty 3. They followed it with WWII, which barely outsold World at War. The last title in their six-year slide was Black Ops 4. It also failed to outsell World at War. Call of Duty looked like it had fallen off.

The Resurgence of Call of Duty

Call of Duty: Modern Warfare, 2019 -> Call of Duty: Modern Warfare 2, 2022

Call of Duty didn’t return to having the largest game launch in history by innovating. They did it by polishing.

There was a time when the Call of Duty franchise innovated. Call of Duty 4: Modern Warfare changed multiplayer shooters forever with its progression system and loadout customization. For years, every other shooter tried to be Call of Duty. Then the leadership behind the series left. They formed Respawn, which has since innovated on the battle royale genre with Apex Legends. That left multiple studios to maintain the series, but none of the original creative spark. So far, Infinity Ward, Treyarch and Raven have succeeded through polishing, not innovating.

They did it with horde mode. Gears of War created horde mode, a mode where players defend an area against waves of AI-controlled enemies. Treyarch implemented and furthered the mode with Nazi Zombies, a horde mode in Call of Duty: World at War. They have since created the most successful horde mode of all time.

Call of Duty’s Warzone is outperforming all other BRs on Steam

More recently, they did it with the battle royale (BR). PUBG and Fortnite set the world on fire with the BR genre, which pits a large group of players against each other in an ever-shrinking arena, akin to the Hunger Games. Every shooter added a battle royale, and Call of Duty had to do the same. Though it was not world-beating, Treyarch created a BR in Black Ops 4. Infinity Ward then showed just how good they were at polishing when they created Warzone. Warzone is one of the biggest BRs in the world. They didn’t add much to the genre. They took a lot of existing ideas and put a Call of Duty twist on them. They polished, not innovated.

Odd Studio Out

I left a studio out: Sledgehammer. I think Sledgehammer proved that they do know how to innovate. Most of the failed Call of Duty games rolled out half-baked ideas and innovations that didn’t change the core gameplay loop. Advanced Warfare took COD in a new direction. It was the most innovative COD game since Call of Duty 4. The problem was that it wasn’t a COD game.

They changed too much, and the game didn’t feel right to COD players. However, this just shows the potential Activision has with Sledgehammer. A lot of the changes in Advanced Warfare went on to be successful elsewhere. The movement system in the game is similar to the movement system in Apex Legends. The future aesthetic is similar to Titanfall or Halo. In a lot of ways, Sledgehammer carries the torch handed off by West and Zampella. They just haven’t been a good fit on Call of Duty.

The Future of Call of Duty

Call of Duty players just want to play Call of Duty. There’s a lot of crossover between COD players and fans of other shooters. However, time has shown that when people buy a COD game, they are looking for something specific. COD finds itself in an interesting position. On the surface, COD would be perfect as a live-service game: release a new COD every 3–5 years instead of every year, and provide new content for it during that lifecycle. However, that doesn’t fit the business model. Even poor COD launches made millions, so why pass up on selling a new game?

Call of Duty should embrace what it is and market itself like a sports game. Instead of messing around with Vanguards or WWII, they should release yearly versions of the games: Modern Warfare 2022, Black Ops 23. That way, they can keep polishing the games every year and keep adding new content without having to reinvent the wheel every single year. People don’t want a new wheel anyway.

Sources: for sales figures.

Steam charts for player data.

Gamerant for COD games ranked by sales.


Six companies used to rule gaming. Only two of them still exist.

Photo Credit: Jason from The Wasteland

Titans in Gaming Part 1: The Old Titans

I found a series of articles in Computer Gaming World from the late eighties talking about the “Titans of Gaming.” They covered what they considered to be the five most important game producers. Of the five, two names may be familiar: Electronic Arts and Activision. And the things that are said about them are telling. On Activision:

“Activision’s forte is raw talent and rampant creativity. For optimum effect, this must be harnessed and channeled, not just sprayed around indiscriminately, and the only way to do so is under pressure from the consumer.” Computer Gaming World 38, June 87 (emphasis added).

On EA:

“Some of EA’s games are based on premises which have already been explored by other companies, but even when this is the case, EA’s distinctive style brings new life to the most over-exploited of ideas.” Computer Gaming World 37, May 1987 (emphasis added).

These do not sound like the companies of today. One does not think of “rampant creativity” when one thinks of Activision. EA still makes games based on premises that have already been explored, but would you say they have a distinctive style? I would argue that both companies are natural evolutions of their 1980s selves, and that they outlived their competition because of changes they made during the video game crash.

The video game crash

Gaming went through a rough period in the early eighties. The video game crash happened in 1983, and many companies active at the time didn’t make it. The crash primarily affected console makers and console game studios in the US, but it had ripple effects throughout the rest of the industry. Most of the companies we’ll be looking at today made PC games during the 80s, which did not take as much of a hit as console games. Still, you will see how the turbulence caused a need for change that some could weather, and others could not.

The Casualties

Epyx
Epyx is one of the failed titans of the gaming industry. They had their heyday during the video game crash. They specialized in action games. One of their most popular games was called California Games. It was a different time. During this era, action games did not dominate the landscape like they do today. Most AAA games today would be considered action games then. During that era, adventure games had a larger foothold in the PC marketplace, and action games did better in the arcade and on consoles. Unfortunately for Epyx, the console market crashed for a few years.

Epyx’s California Games is one of the few Epyx titles still being sold

Epyx failed the way many of the companies in that era did: they failed to adapt. They made most of their money on the Commodore 64, and they refused to make games for Nintendo systems because they didn’t want to pay the licensing fees. They ended up in a situation where they wasted money on old systems, and when they did move forward, they made a deal to make games for the Atari Lynx. They ended up being overextended, and they had to file for bankruptcy. They sold off their properties piecemeal and have left virtually no footprint on the gaming landscape.

Infocom
Infocom specialized in a type of game that doesn’t really exist anymore: text-based adventures. They created Zork, which you may have heard of, and many others you probably haven’t. They relied on text-based games for their entire lifespan, and they dominated the PC market in the early 80s. They survived the video game crash initially because they didn’t rely on the console market. As the 80s wore on, though, their limited range started to hurt them. Graphical games were growing in popularity. Infocom’s solution? Marketing. They argued that graphics were overrated compared to the power of human imagination. For a while, people believed them.

Infocom’s campaign against graphics in games.

Their lack of range hurt Infocom, but their non-game efforts forced them to sell the company. Infocom tried to expand into business software while continuing in games. This meant that they dropped a huge amount of money into a database product called Cornerstone. Unfortunately, Cornerstone flopped, and they didn’t have enough money to keep the doors open long-term. They laid off half of their staff and sold the company to Activision. Activision would close the studio entirely not long after. Overall, the value of the Infocom library was not as high as it could have been. Zork remained a valuable property for years, but the rest of their adventures faded into obscurity. Their inability to adapt to the graphical era ended up ruining the brand in the long run.

MicroProse
MicroProse formed right before the video game crash. They specialized in PC games, though, so the crash didn’t force them out of business. They made simulation games and started a number of game series that are still ongoing today. Sid Meier was one of the founders. MicroProse released XCOM and Civilization.

MicroProse’s most successful franchise, Civilization, outlived the company by decades.

Of the three dead titans, MicroProse’s library would transfer best to the modern day. They made many vehicle simulation games, like Solo Flight and F-15 Strike Eagle. Vehicle simulation games thrive now as a niche market. Their other specialization, strategy games, has grown into a larger niche than it was at the time. XCOM and Civilization, two strategy franchises MicroProse invented, have both gone on to be top games in the strategy genre. In fact, both titles are now produced by Firaxis Games, a studio formed by former MicroProse members.

Ultimately, MicroProse fell prey to two major problems. The first was talent drain, a common theme among these fallen companies. The best-known person to leave MicroProse was Sid Meier, and when he left, he took other talented leaders and developers with him. The second was a failure to diversify. Their niches weren’t large enough in the 80s and 90s to support the business, so they looked for other options. Bill Stealey, one of the founders, insisted on investing in arcade games; Sid Meier disagreed and ended up leaving over it. MicroProse’s arcade games failed, and the company went public to pay back the debts it had accrued in the arcade business. They limped along for years as the team shrank and they produced fewer and fewer games. Firaxis, the company formed from former MicroProse talent, is alive and thriving to this day.

Atari
Atari wasn’t mentioned in the Titans of Gaming series, but it definitely fit the bill. Atari used to be synonymous with console gaming, and they also played a core role in the video game crash. They made the most successful gaming console of their generation, the Atari 2600, and popularized the gaming console. They started out as a game studio, making arcade ports and a few original games. Nolan Bushnell, Atari’s founder, wanted to build a console, but knew the company couldn’t afford to bring it to market. So, he sold the company to Warner Communications. The 2600 succeeded, but Bushnell and Warner had a falling out, so Warner let him go. Just five years later, Atari became the main cause of the video game crash.

The Atari 2600 was synonymous with gaming for years.

They failed in many ways, but ultimately, they overextended and produced too many games that they couldn’t sell. They produced a poor port of Pac-Man for the 2600, printing more copies of the game than there were existing 2600s in the expectation that it would sell the console. They failed to sell all those copies, and had to eat the manufacturing costs of all those cartridges. Then they made what is widely cited as the worst game of all time: E.T. Atari rushed E.T. out the door with six weeks of development time. They failed to sell many of the two million copies they shipped.

All these things come back to a company that had become bloated, and that thought it could sell video games regardless of their quality. When the games market flooded, they had too many failed titles and too much excess stock to stay afloat. Warner Communications sold the company, and it looked like the end of gaming.

Fall of the Titans

Each of those companies was a titan in their heyday. Some still exist as brand names, like Atari and MicroProse, without any of the people or properties they used to have. Others are gone completely. What do they have in common?

1. Failure to adapt to the shifting market: Although Epyx, Atari, MicroProse and Infocom all made different types of games/consoles, they all fell apart within ten years of each other. The 80s and 90s were a period of rapid change in gaming, and these companies didn’t follow along with it.

2. Loss of original leadership: Most of these companies lost prominent members around this era. Atari lost its founder and MicroProse lost Sid Meier. The talent drain crippled their ability to adapt.

Now, let’s look at some of the survivors.

The survivors

Gaming had a lot of growing to do, and it bounced back stronger after the video game crash. However, most of the existing market was replaced. Most, but not all. I want to highlight some of the survivors of the video game crash and show why they are still winners today.

Activision
Activision is an old game company. They were, in fact, the first company to make third-party console games. Former Atari employees started the company in 1979. They made Atari 2600 games. And just a few years after they opened, the console market crashed. They pivoted a few times, first to PC games, then to PC software. They even acquired Infocom. None of it worked. However, a young businessman saw potential in the company, and you might recognize his name: Bobby Kotick.

Bobby Kotick has owned Activision for over 30 years 

Kotick saw the value of Activision for its brand name, so he bought it. Kotick led the company through bankruptcy and moved them back to basics. He focused on IP. He went through Activision’s back catalog and republished what he could, and he released sequels to things that worked, like Zork. Business trended back up.

They used their success to go on an acquisition spree. They bought many well-known companies, some of which they still hold. Raven Software, Infinity Ward and Treyarch are all acquisitions Activision made in the late 90s and early 2000s, and all three now work on Call of Duty. They expanded across genres and platforms, succeeding as a business first and a producer of games second.

Bobby Kotick saved Activision, and he is still at the helm. In many ways, they are still the same company. They use their IP and back-catalog on a level rivaled only by Nintendo. They do not hesitate to gut companies under them when they underperform. They are not the company they were pre-Kotick, known for “raw talent and rampant creativity.” Often, raw talent leaves Activision to form their own studios. However, they are a true titan of gaming.

EA
EA was a company formed on the eve of the video game crash. They started in 1982, and they focused on home computer games, not console games. Unlike Activision, or Infocom, or Epyx, EA was not formed by a passionate group of game developers. It was the brainchild of Trip Hawkins, a former Marketing Director at Apple. He saw the business of games as being profitable, and so he formed a game publisher, not a developer. The EA formed then was not too dissimilar from the one today.

Sports games have been a cornerstone of EA’s business since the 80s

EA got its big break making sports games. To this day, sports games make up a large portion of their revenue. The sports game market was competitive at the time, and EA gained an advantage through smart partnerships. First, they partnered with NBA players Dr. J and Larry Bird to make a one-on-one basketball game. The game succeeded, but not on the level of their next major sports title, Madden. They partnered with John Madden and worked closely with him to hone the gameplay. In both cases, EA used its partnerships both to improve gameplay and to advertise the games. EA’s success through the video game crash came down to smart business decisions. They were ahead of their time, behaving more like a big tech company than an old-school game developer.

Nintendo
Nintendo wasn’t listed as a Titan in Gaming in the 80s, but they would definitely qualify. I won’t go into much detail about them here, because I already wrote another article on their president during this era. They succeeded by making an affordable console, and by having strong game IP for decades. Their president had a strong hand in both.

The new titans

How would the titans in gaming list look today? Well, we already know two of the names. To keep with the spirit of the CGW list, we will ignore the console makers. These are the five most important game publishers today.

1. Tencent

2. NetEase

3. EA

4. Activision

5. Bandai Namco

Soon, I’ll cover what makes these the new titans in gaming.

Sources:
The Ultimate History Of Video Games Revisited (retrieved from

Computer Gaming World Issues 36–41

CGW Issue 53 (on Activision)


The real story behind the Activision-Blizzard acquisition drama

Sony has a lot to fear from the Activision-Blizzard acquisition, and it has little to do with Call of Duty

A business move has dominated gaming news for the last month. Not new game announcements, or a new console, or tech or a service. We’ve been caught up in the drama around a business deal and the fallout from it. We’ve been caught up in a new saga of Xbox vs PlayStation, with Activision-Blizzard as the newest wrinkle. Microsoft offered almost $70 billion to buy Activision-Blizzard in January of 2022. Activision-Blizzard is made up of three game publishers: Activision, Blizzard and King. Activision has been one of the biggest and most consistent game publishers for over thirty years. They own Call of Duty, the biggest console-gaming franchise in the world. King dominates on mobile. You may have heard of Candy Crush, which King produces. Candy Crush has been a top-ten grossing mobile game for years running. Blizzard rounds out the three. Blizzard has had a rough few years, but they still own World of Warcraft, Diablo and Overwatch. Overall, Activision-Blizzard owns the most valuable third-party game catalog in the world. Microsoft wants to own that.

Microsoft may have jumped the gun a little bit on their promo graphics.

Activision-Blizzard accepted Microsoft’s deal. Microsoft announced it with fanfare and talked about all the things they would do with the Activision-Blizzard catalog. Not everyone was pleased.

Government Intervention

The deal has not gone off as smoothly as Microsoft had hoped. Microsoft is a global company, and so they are subject to scrutiny in many countries. Some countries, like Brazil, have already cleared the deal. However, others are holding it up. Most of the news around the acquisition right now centers on the Competition and Markets Authority (CMA) in the United Kingdom. The CMA won’t approve the acquisition until they finish their investigation. Microsoft is taking the CMA seriously. The CMA recently required Meta to sell Giphy, a GIF-sharing website it owned, reasoning that Meta could restrict access to GIFs from other social media sites and gain an unfair competitive advantage. They hold the power to cancel the Activision-Blizzard acquisition as well.

The CMA building in London

The CMA has called into question a lot of things regarding Microsoft and Sony’s positions in gaming. Why Sony? Sony has been talking openly to the CMA about the perceived harm that Microsoft would inflict on the industry. They brought up the power of Call of Duty, calling it an “essential game”, and the CMA entertained that argument. Microsoft countered by bringing up Nintendo’s success: the Nintendo Switch is doing great without even having Call of Duty on the platform. Then Sony pivoted and brought up the danger of Microsoft making Activision-Blizzard games exclusive. They started with console exclusivity, and then pivoted again to the dangers of Microsoft putting Call of Duty on Game Pass. Microsoft CEO Satya Nadella argued that Xbox is in third place in the console wars. Sony, in his estimation, is winning. “So if this is about competition, let us have competition,” Nadella said.

All of this can come off as childish. Maybe Sony is being a sore loser, or maybe Microsoft is deliberately downplaying its strength as a company to get the acquisition approved. But it runs deeper than that, and the arguments on both sides reveal a lot about Microsoft and Sony’s business models and views on gaming.

Microsoft’s Long-Term Strategy with Xbox

Xbox is prioritizing the cloud. Photo by Muha Ajjan on Unsplash

Xbox struggled during the Xbox One era. The Xbox One underperformed in sales, losing to both the PlayStation 4 and the Nintendo Switch. Microsoft had to cancel or delay big exclusives like Scalebound and Halo Infinite, and the exclusives that did come out didn’t compete well with PlayStation exclusives. With all this in mind, Microsoft decided to shift their focus. They stopped focusing on selling consoles, and instead focused on building the Xbox brand outside of consoles.

They created Game Pass, a game subscription service similar to Netflix. They moved in 2018 to put all their first-party games on Game Pass day one. At the time, you could buy a year’s subscription to Game Pass for $60. For the price of one game, you could have access to every new Xbox release and a library of other games, too. They launched Game Pass on PC the next year, at the same introductory price point. Microsoft decided to make a long-term investment. They pivoted away from the traditional console strategy. 

Nintendo sells their first-party games for full price years after release. Mario Kart 8: Deluxe has been a top-10 seller for boxed games since its release in 2017, and it hasn’t seen a price decrease. PlayStation games don’t hold quite that much value, but as we’ll see, Sony still relies on first-party AAA game sales at $70 apiece. Sony has released several first-party games that grossed in the year’s top ten; Microsoft has released zero. Microsoft still sells their first-party games standalone, but they have shown that they don’t plan to win in gaming the same way as Sony or Nintendo.

Microsoft is trying to play the long game on the console wars. The Xbox Series S has the worst specs of current generation consoles, but it runs Game Pass games just fine. With cloud streaming, Microsoft could also extend the life of the Series S beyond when its hardware falls too far behind. They sell both the Series X and the Series S at a loss, but the Series S is sold at a higher loss. With chip shortages, both Microsoft and Sony have a hard time producing enough premium consoles to meet demand. The Series S gives Microsoft another way to push their platform even when they face supply constraints. 

Xbox as a platform makes most of its revenue digitally. AAA game sales don’t matter as much to Microsoft as they do to Sony. However, they do care about subscription revenue. The Brazilian investigation of the acquisition forced Microsoft and Sony to report how much they make from subscription services. Microsoft made approximately 20% of their revenue on Game Pass alone. The exact number for PlayStation Plus hasn’t been released publicly, but we know that it is lower. These numbers only take console data into account, so Game Pass is outperforming PlayStation Plus by even more once you factor in PC.

In this light, the Activision-Blizzard acquisition is just another step in a long-term plan. Microsoft could put Activision-Blizzard games on Game Pass and get their subscription revenues up. They could also sell the games on other platforms, like they already do with Bethesda. Over time, the percentage of their revenue that they make selling Xbox consoles and games will drop, and they will make more money from subscriptions and sales on other platforms.

Sony’s strategy with PlayStation

Photo by Kerde Severin on Unsplash

Sony’s approach with PlayStation is closer to the traditional console strategy. They make more money than Microsoft on console sales, and they make more money on AAA game sales. They also make more money on boxed game sales. They currently have higher revenue and profit from gaming than Microsoft does.

For these reasons, Sony wants to maintain the status quo in gaming. As long as consumers are buying their AAA games for $70, several times a year, Sony will maintain and build on their lead. They have more first- and second-party AAA IP, and these games have sold better than Xbox exclusives. They lose these advantages if they have to follow Microsoft. They don’t have enough IP, or the cash flow to buy IP, to make PlayStation Plus compete with Game Pass on value. If they started releasing games on PC the same day as on PlayStation, they would cannibalize their console sales and their full-price AAA game sales. They cannot compete on infrastructure, either, so cloud gaming and subscription services are not as viable for them.

The Sticking Point: Game Subscriptions and Cloud Gaming

Photo by C Dustin on Unsplash

All of this comes back to cloud gaming and subscription services. The CMA agrees: the two points of competition that they keep coming back to are cloud gaming and subscription services. Here’s how they laid it out in their issues statement:

“The Merger gave rise to a realistic prospect of an SLC [substantial lessening of competition] as a result of vertical effects arising from:

A: Microsoft withholding or degrading Activision’s content — including popular games such as CoD [Call of Duty] — from other consoles or multi-game subscription services; and

B: Microsoft leveraging its broader ecosystem together with Activision’s game catalogue to strengthen network effects, raise barriers to entry and ultimately foreclose rivals in cloud gaming services.”

They worry that Microsoft could leverage Activision content to force rival multi-game subscription and cloud gaming services out of business.

Microsoft issued a rebuttal, arguing against both points. On cloud gaming, they argued that the technology is still new, and that their investment only advances cloud gaming in general through innovation. On multi-game subscriptions, they argued that “Multi-game subscriptions are a means of payment — not a market.” You can find the full arguments here, as they are too extensive to quote.

Whether subscriptions are an innovation or just a different means of payment, they are changing the gaming landscape as we know it. Microsoft is hitching its future in gaming to Game Pass. Sony sees that as a threat, and so they are trying to convince the CMA to shut the acquisition down. If the deal goes through, we will see whether Sony adapts to the subscription landscape, or whether the gaming ecosystem becomes more stratified. We could see a world where Xbox, PlayStation and Nintendo do not directly compete with one another at all.
