
Visual Prompting: LLMs vs. Image Generation

We’ve been trying a lot of different things in Project Cyborg, our quest to create the DevOps bot. The technology around AI is complicated and evolving quickly. Once you move away from chatbots and start making more complicated things, like working with embeddings and agents, you have to hold a lot of information in your mind. It would be nice to visualize this info.

Visual prompting is what we were looking for, and it’s more complicated than I expected.

Visual Prompting for Image Generation

In my AI work, I’ve been working almost exclusively with LLMs and text generation. I haven’t had much need to use image generation. The tech is really interesting, but not very useful for creating a DevOps bot. However, I did hear about Chainner, a visual composer for image generation. Its interface will be familiar to you if you’ve worked with a node-based shader editor before.

Example of Chainner for images

This is a really cool way of working with image generation models. Instead of working in Python to create images, you can compose them visually. Some things just make more sense mapped out visually, and that could help us mentally simplify some of the complex tasks we’re dealing with. This made me wonder: could I modify Chainner to work with LLMs?

Chainner for LLMs

Chainner doesn’t have anything built-in for LLMs. I’m not really surprised. However, it is well designed, and so it wasn’t very difficult to see how I would implement it myself.

I started with a simple LLM node. Here’s a sample from the code:

class PromptNode(NodeBase):
    def __init__(self):
        super().__init__()
        self.description = "This is a node for making LLM prompts through an agent"
        self.inputs = [
            TextInput("Prompt"),
            EnumInput(
                LLMOptions,
                "LLM Platform",
                option_labels={k: k.value for k in LLMOptions},
            ),
            BoolInput("Use Google"),
            BoolInput("Use Vectorstore"),
            DirectoryInput("Vectorstore Directory", has_handle=True).make_optional(),
            EnumInput(
                EmbeddingsOptions,
                "Embeddings Platform",
                option_labels={k: k.value for k in EmbeddingsOptions},
            ),
        ]
        self.outputs = [TextOutput("Completion")]  # output type is illustrative

        self.category = LLMCategory
        self.name = "LLM Agent"
        self.icon = "MdCalculate"
        self.sub = "Language Models"
LLM node example

With that working, I moved on to creating a node for a Vectorstore (aka embeddings).

class VectorstoreNode(NodeBase):
    def __init__(self):
        super().__init__()
        self.description = "This is a node for loading a vectorstore"
        self.inputs = [
            EnumInput(
                EmbeddingsOptions,
                "Embeddings Platform",
                option_labels={k: k.value for k in EmbeddingsOptions},
            ),
            DirectoryInput("Vectorstore Directory", has_handle=True),
        ]
        self.outputs = [TextOutput("Vectorstore")]  # output type is illustrative

        self.category = LLMCategory
        self.name = "Load Vectorstore"
        self.icon = "MdCalculate"
        self.sub = "Language Models"
A vectorstore node

You get the idea of the workflow. At the end of my experimenting, I ended up with a sample graph that looked like this:

An agent example

Roadblocks for Chainner

It’s about here that I had to abandon the experiment.

It was looking cool, and I liked the concept. There was only one problem: it wasn’t going to work, not as Chainner was designed. I don’t want to get too deep into the weeds, but there’s a dependency issue. We’re using self-hosted embeddings on some of our vectorstores in Project Cyborg, which means we’re using open-source AI models for some of the embeddings. To do this, we’re spinning up spot instances on Lambda Labs. One of the Python libraries you need to run self-hosted embeddings only works on Unix-based systems (a file-path faux pas). That’s not a problem if you’re working in VSCode or a command line. It is a problem when you need to run an app with a GUI on Windows.

There are also some other problems with the visual scripting in general that stopped me from pursuing it further.

Other Visual Prompting Solutions

The day after I decided to stop pursuing the Chainner option, Langflow was released.

Look familiar?

Langflow is a visual interface for Langchain. So, basically exactly what I was doing. It is very new and under active development, but it already does some things very well. If you’re looking to create a simple app using an agent and you don’t know Python, Langflow gives you an option. It doesn’t currently support exporting to code, though, so it has limited use in production. You could treat it as interactive outlining.

It does highlight the biggest problem currently with visual prompt engineering: you still need a strong understanding of the systems at play. To even use Langflow, you have to understand what a zero-shot agent is, how it interacts with an LLM chain, and how you would create tools and supply them to the agent. You don’t really gain a lot in terms of complexity reduction, and you lose a lot in terms of customization. Unless you customize your nodes to expose every single parameter that the underlying API supplies, you have to create tons of separate, very similar nodes. For LLMs, visual graphs are only really useful for small tasks.

Ultimately, the existing solutions serve a purpose, but they don’t really reduce the cognitive load of working with LLMs. You still need to know all of the same things, you just might be able to look at it in a different light. With everything changing so fast, it makes more sense for us to stick with good, old-fashioned programming. Hopefully, visual prompting will catch up, and be useful for more than image processing and chatbots.


How to take the brain out of the box: AI Agents

An AI Agent at work answers questions ChatGPT can’t

Working with LLMs is complicated. For simple setups, like general-purpose chatbots (ChatGPT) or classification, there are few moving pieces. But when it’s time to get serious work done, you have to coax your model into doing a lot more. We’re working on Project Cyborg, a DevOps bot that can identify security flaws, find cost-saving opportunities in your cloud deployments, and help you follow best practices. What we need is an AI Agent.

Why do we need an agent?

Let’s start at the base of modern AI: the Large Language Model (LLM).

LLMs work on prediction. Give an LLM a prompt, and it will try to predict the right answer (a completion). Everything we do with AI and text generation is powered by LLMs. GPT-3, GPT-3.5 and GPT-4 are all LLMs. The problem is that they are limited to their initial training data. These models cannot access the outside world. They are a brain in a box.

You have a few different options depending on your use case. You can use fine-tuning, where the model undergoes another training stage. Fine-tuning is excellent and has a lot of use cases (like classification), but it still doesn’t let you use live data. You can also use embeddings. These effectively extend the context length (memory) of your AI so that it can process more data at once. Embeddings help a lot, but they don’t help the LLM take action in the outside world.

The other option is to use an AI agent.

What is an Agent?

Here’s the simplest definition:

An AI agent is powered by an LLM, and it uses tools (like Google Search, a calculator, or a vectorstore) to interact with the outside world.

That way, you can take advantage of the communication skills of an LLM, and also work on real-world problems. Without an agent, LLMs are limited to things like chatbots, classification and generative text. With agents, you can have a bot that can pull live information and make changes in the world. You’re giving your brain in a box a body.

How can we do this? Well, I’m going to be using Langchain, which comes with multiple agent implementations. These are based on ReAct, a system outlined in a paper by researchers at Princeton and Google. The details are complicated, but the implementation is fairly simple: you tell your AI model to respond in a certain style. You ask it to think things through step by step, and then take actions using tools. LLMs can’t use tools by default, so they’ll try to make up what the tools would do. For example, if you give a model access to Google, it will just pretend to make a Google search. That’s where you step in: you set up the tools so that an actual Google search gets made, and then feed the results back into the LLM.
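
The tool loop described above can be sketched in a few lines of Python. This is a toy illustration, not Langchain’s actual internals; the "Action:" format and the tool names are made up for the example:

```python
def parse_action(line):
    # Expect lines like: "Action: Calculator[2+2]" (format is illustrative)
    name, rest = line[len("Action: "):].split("[", 1)
    return name, rest.rstrip("]")

def react_step(llm_output, tools):
    # If the model asked for a tool, actually run it and feed back an observation
    if llm_output.startswith("Action: "):
        name, arg = parse_action(llm_output)
        return f"Observation: {tools[name](arg)}"
    return llm_output  # the model gave a final answer

tools = {"Calculator": lambda expr: str(eval(expr))}
print(react_step("Action: Calculator[2+2]", tools))  # prints: Observation: 4
```

In a real agent, the observation string is appended to the prompt and the model is called again, repeating until it produces a final answer instead of an action.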

The results can seem magical.

Example: AI Agent with Google Search

Let’s start with a simple agent that has access to two tools.

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI

# We'll use an OpenAI model (Davinci by default) as the "brain" of our agent
llm = OpenAI(temperature=0)

# We'll provide two tools to the agent to solve problems: Google, and a tool for handling math
tools = load_tools(["google-search", "llm-math"], llm=llm)

# This agent is based on the ReAct paper
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

while True:
    prompt = input("What would you like the agent to tell you (press CTRL+C to quit)? ")
    agent.run(prompt)

These agent examples look the best in video form:

Example: AI Agent with Access to External Documents (Vectorstore)

Here’s another example that uses a tool to pull information about Azure. I converted the official Azure documentation into a Vectorstore (aka embeddings). This is being used by Project Cyborg so that our DevOps bot can understand best practices and the capabilities of Azure.

tools = [
    Tool(
        name="Azure QA System",
        func=azure_qa.run,  # the QA chain built over the embedded Azure docs
        description="useful for when you need to answer questions about Azure. Input should be a fully formed question.",
    ),
]

Here it is in action:

AI Agents make LLMs useful

Chatbots are cool, and they are very useful for many things. They can’t do everything, though. Most of the time, your AI will need access to live info, and you’d like for it to be able to do things for you, not just be a very smart brain that can talk. Agents can do that for you. We’re figuring out how we can use them here at Electric Pipelines. If you want help figuring out how agents could help your business, let us know! We’d be happy to talk.


What does AI Embedding have to do with Devops?

AI embeddings are powerful. We’re working on Project Cyborg, a project to create a DevOps bot.

There are a lot of steps to get there. Our bot should be able to analyze real-world systems and find out where we could implement best practices. It should be able to look at security systems and cloud deployments to help us better serve our customers.

To that end, our bot needs to know what best practices are. All of the documentation for Azure and AWS is available for free, and it’s searchable. However, online documentation doesn’t help with problem solving. It only helps if you have someone capable running a search. We want to be able to search based on our problems and real-world deployments. The solution: embeddings.

AI Embeddings

Here’s the technical definition: Text embeddings measure the relatedness of text strings.

Let’s talk application: embeddings allow us to compare the meaning of sentences. Instead of needing to know the right words for what you’re searching for, you can search more generally. Embedding enables that. 

Embeddings work by converting text into a list of numbers. Then, those numbers can be compared to one another later, and similarities can be found that a human couldn’t detect. Converting text to embeddings is not terribly difficult. OpenAI offers an embedding model that runs off of Ada, their cheapest model. Ada has a problem, though.
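To make the comparison concrete: two embeddings are typically compared with cosine similarity. The vectors below are made up for illustration (real Ada embeddings have around 1,500 dimensions):

```python
from math import sqrt

def cosine_similarity(a, b):
    # Dot product divided by the product of magnitudes: closer to 1.0 = closer in meaning
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Toy 3-dimensional "embeddings" (the numbers are invented)
cat = [0.9, 0.1, 0.2]
kitten = [0.85, 0.15, 0.25]
car = [0.1, 0.9, 0.3]

print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # prints: True
```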

Ada has a memory problem

Ada is a powerful model, and if it can keep track of what it’s supposed to be doing, it does excellent work. However, it has a low context length, which is just a fancy way of saying it has Alzheimer’s. So, you can’t give Ada a long document and have it remember all of it. It can only hold a few sentences in its memory at a time. More advanced models, like Davinci, have much better memory. We need a way to get Ada to remember more.


We’ve been using Langchain for a few different parts of Project Cyborg, and it has a great tool in place for embedding as well. It has tools to split documents up into shorter chunks so that Ada can process them one at a time. It can then store these chunks together in a document store, which acts as long-term memory for Ada. You can embed large documents and collections of documents together, and then access them later.

By breaking documents up into smaller pieces, you can search your store for the chunks relevant to a query. Let’s go over some examples.
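
Langchain’s splitters are smarter about sentence and paragraph boundaries, but the core chunking idea is simple enough to sketch in plain Python (the chunk sizes here are arbitrary):

```python
def split_into_chunks(text, chunk_size=200, overlap=20):
    # Step through the text, keeping a little overlap so context isn't cut mid-thought
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

chunks = split_into_chunks("some long document " * 100)
```

Each chunk is then embedded individually, so Ada only ever has to look at one small piece at a time.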

You can see the document (data-factory.txt) and the different chunks (5076, 234, 5536) it’s pulling from for the answer
In this case, it pulls from multiple different documents to formulate an answer

Here you can see that we ask a question. An AI model ingests our question and then checks its long-term memory (our document store) for the answer. If it knows the answer, it will reply and reference where it got that answer from.
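
Under the hood, checking long-term memory is a nearest-neighbor search over the stored chunk embeddings. Here is a toy sketch; the chunks and their 2-dimensional vectors are invented, and real stores use high-dimensional embeddings with libraries built for scale:

```python
# Hypothetical store mapping each chunk to its (made-up) embedding vector
store = {
    "Data Factory is Azure's data integration service.": [0.9, 0.1],
    "Blob Storage holds unstructured object data.": [0.1, 0.9],
}

def nearest_chunk(query_embedding):
    # Return the stored chunk whose embedding points most in the query's direction
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(store, key=lambda chunk: dot(store[chunk], query_embedding))

print(nearest_chunk([1.0, 0.0]))  # prints the Data Factory chunk
```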

Fine-Tuning vs. Embedding

Embeddings differ from fine-tuning in a few ways. Most importantly, embeddings are cheaper and easier to run, both for in-house models and for OpenAI models. Once you’ve saved your documents into a store, you can access them using few tokens and with off-the-shelf models. The downside comes in the initial embedding: converting a lot of documents to an embedded format, like we needed to, takes millions of tokens. Even at low rates, that can add up.

Fine Tuned usage is significantly more expensive across the board

On the flip side, fine-tuning will typically use far fewer tokens than embedding, so even though the cost per token is much higher, it can be cheaper to fine-tune a model than to build out an embedded document store. However, running a fine-tuned model is expensive: if you use OpenAI, the cost per token is 4x the price of an off-the-shelf model. So, pick your poison. Some applications can absorb embedding’s higher initial cost in exchange for cheaper processing later.
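
The trade-off is easy to put into back-of-the-envelope numbers. The per-token rates below are assumptions for illustration only; check current pricing before relying on them:

```python
def cost_usd(tokens, rate_per_1k_tokens):
    return tokens / 1000 * rate_per_1k_tokens

# Assumed rates in $ per 1K tokens (illustrative, not current pricing)
ADA_EMBEDDING = 0.0004
DAVINCI_FINETUNED = 0.12

# Embedding 10M tokens of documentation once: about $4
print(cost_usd(10_000_000, ADA_EMBEDDING))
# Running 10K queries of ~1K tokens each through a fine-tuned Davinci: about $1,200
print(cost_usd(10_000 * 1000, DAVINCI_FINETUNED))
```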


Using Classification to Create an AI Bot to Scrape the News


We’re hard at work on Project Cyborg, our DevOps bot designed to enhance our team to provide 10x the DevOps services per person. Building a bot like this takes a lot of pieces working in concert. To that end, we need a step in our chain to classify requests: does a query need to go to our Containerization model or our Security model? The solution: classification. A model that can figure out what kind of prompt it has been given. Then, we can pass along the prompt to the correct model. To test out the options on OpenAI for classification, I trained a model to determine if news articles would be relevant to our business or not.

Google News

I started by pulling down the articles from Google News.

from GoogleNews import GoogleNews

start_date = '01-01-2023'
end_date = '02-02-2023'
search_term = "Topic:Technology"

googlenews = GoogleNews(start=start_date, end=end_date)
googlenews.search(search_term)

This way, I can pull down a list of Google News articles with a certain search term within a date range. Google News does not do a good job of staying on topic by itself.

The second result Google News returns here already moves away from what we searched for

So, once I had this list of articles with full text and summaries, I loaded them into a dataframe using Pandas and output that to an Excel sheet.

import pandas as pd

for i in range(2, 20):
    # Grab the page info for each article
    googlenews.get_page(i)
# Create a Pandas DataFrame to store the articles
df = pd.DataFrame(googlenews.result())
An example of some of the Google News data we pulled in

Fine-Tuning for Classification

Then comes the human effort. I need to teach the bot what articles I consider relevant to our business. So, I took the Excel sheet and added another column, Relevancy.

The updated spreadsheet had a column for relevancy

I then manually ran down a lot of articles, looked at titles, summaries and sometimes the full text, and marked them as relevant or irrelevant.

Then, I took the information I had for each article (title, summary, and full text) and combined it into one column. This forms the prompt for the fine-tuning. The completion is taken from the relevancy column. I put these two columns into a CSV file, which became the training set for our fine-tuned model.
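
The prompt/completion assembly can be sketched with pandas. The column names and separator token here are illustrative, and OpenAI’s data preparation tool normalizes the final format anyway:

```python
import pandas as pd

# Hypothetical rows standing in for the scraped Google News data
df = pd.DataFrame({
    "Title": ["New Azure DevOps features announced"],
    "Summary": ["Microsoft adds pipeline and security improvements."],
    "Relevancy": ["relevant"],
})

# Combine the article fields into one prompt column; the completion is the label
df["prompt"] = df["Title"] + "\n" + df["Summary"] + "\n\n###\n\n"
df["completion"] = " " + df["Relevancy"]
df[["prompt", "completion"]].to_csv("training.csv", index=False)
```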

Once I had the dataset, it was time to train the model. I ran the csv through OpenAI’s data preparation tool.

OpenAI’s fine-tuning data preparation tool makes sure your dataset is properly formatted for fine-tuning

I got out our training dataset and our validation dataset. With that in hand, it was time to train a model. I selected Ada, the least-advanced GPT-3 model available. It’s not close to ChatGPT, but it is good for simple things like classification. A few cents and half an hour later, I had a fine-tuned model.


I can now integrate the fine-tuned model into my Google News scraping app. Now, it can pull down articles from a search term, and automatically determine if they are relevant or not. The relevant ones go into a spreadsheet to be viewed later. The app dynamically builds prompts that match the training data, and so I end up with a spreadsheet with only relevant articles.

A table with Google News articles only relevant to our company


Call of Duty should stop innovating

The series’ biggest successes don’t come from innovative ideas, but old ones done well.

Call of Duty lost its way.

Call of Duty is one of the oldest franchises in gaming. After Call of Duty 4: Modern Warfare, Activision began releasing new Call of Duty (COD) games every year. That makes 15 games in 15 years. The latest COD game is Call of Duty: Modern Warfare 2. This is the second time they’ve released a game called Call of Duty: Modern Warfare 2. It gets confusing.

Four studios release Call of Duty games: Infinity Ward, Treyarch, Raven Software and Sledgehammer. Infinity Ward started the series and made the first four games, but the original creators, Jason West and Vince Zampella, left in 2010 after contract disputes. Infinity Ward still lives on and makes games in the series, but without its original creative direction.

I’ll be going through the rise of Call of Duty, the decline, and the resurgence.

All Call of Duty games released since Call of Duty 4, in order of release.

The Rise of Call of Duty

Call of Duty 4: Modern Warfare, 2007 -> Call of Duty: Black Ops 2, 2012

The franchise used to be on top of the world. The first three games sold alright and established the franchise. However, starting with Call of Duty 4: Modern Warfare, they released the biggest hits of their era. COD 4 shattered expectations and changed the way the world viewed shooters. It was one of the bestselling games of all time. They released World at War the next year. It deviated from the modern theme, and came close to, but did not exceed Call of Duty 4. Modern Warfare 2 was a return to form. It outsold Call of Duty 4 by almost ten million copies. The year after that, they released Black Ops. It was the biggest game of all time when it launched. It looked like Call of Duty would start breaking records every single year.

They didn’t quite make it. After Black Ops, they released Modern Warfare 3. It didn’t match up to Black Ops, but it was still a huge success. The capstone of Call of Duty’s era of dominance was Black Ops 2. It hit $1 billion in sales faster than any other entertainment property ever had to that point. It was also the last time a Call of Duty game would reach those heights for a decade.

The Decline of Call of Duty

Call of Duty: Ghosts, 2013 -> Black Ops 4, 2018

What happened? Call of Duty lost its way. 
After the smash hit of Black Ops 2, they released Call of Duty: Ghosts. For the first time since World at War, people saw the new game as a step back for the franchise. Black Ops 2 had refined the game to a level Ghosts couldn’t match. The solution? Change things up.

They made Advanced Warfare, a game which completely changed the way Call of Duty played. Gone were the days of real-world weapons and tight, grounded gunplay. Instead, they introduced flying movement options, robot suits with chain guns, and a futuristic aesthetic. To be fair, Black Ops 2 also had a futuristic theme. However, Advanced Warfare pushed far beyond what Black Ops had been willing to do. Audiences didn’t like it. It sold worse than Ghosts and Modern Warfare 2. It heralded the true decline of Call of Duty.

Infinite Warfare (2016) marked a whole-new low for the franchise.

They managed to recover slightly with Black Ops 3. BO3 was another deviation from the standard COD formula. They introduced characters with ultimate abilities, not too dissimilar from games like Overwatch. It didn’t sell especially well, but it wasn’t the disaster that Infinite Warfare would be.

The next game, Infinite Warfare, doubled down on a lot of Advanced Warfare’s features. Advanced Warfare failed to outsell Call of Duty: Ghosts, but Infinite Warfare didn’t even outsell Call of Duty: World at War. It was the worst-selling COD title since Call of Duty 3. They followed it with WWII, which barely outsold World at War. The last title in their six-year slide was Black Ops 4. It also failed to outsell World at War. Call of Duty looked like it had fallen off.

The Resurgence of Call of Duty

Call of Duty: Modern Warfare, 2019 -> Call of Duty: Modern Warfare 2, 2022

Call of Duty didn’t return to having the largest game launch in history by innovating. They did it by polishing.

There was a time when the Call of Duty franchise innovated. Call of Duty 4: Modern Warfare changed multiplayer shooters forever with its progression system and loadout customization. For years, every other shooter tried to be Call of Duty. Then, the leadership behind the series left. They formed Respawn, which recently innovated on the battle royale genre with Apex Legends. That left multiple studios to maintain the series, but none of the original creative spark. So far, Infinity Ward, Treyarch and Raven have succeeded through polishing, not innovating.

They did it with horde mode. Gears of War created horde mode, a mode where players defend an area against waves of AI-controlled units. Treyarch implemented and furthered the mode with Nazi Zombies, a horde mode in Call of Duty: World at War. They have created the most successful horde mode of all time.

Call of Duty’s Warzone is outperforming all other BRs on Steam

More recently, they did it with the battle royale (BR). PUBG and Fortnite set the world on fire with the BR genre. Battle royale games pit a large group of players against each other in an ever-shrinking arena, akin to the Hunger Games. Every shooter added a battle royale, and Call of Duty had to do the same. Though it was not world-beating, Treyarch created a BR in Black Ops 4. Infinity Ward showed just how good they were at polishing when they created Warzone. Warzone is one of the biggest BRs in the world. They didn’t add much to the genre; they took a lot of existing ideas and put a Call of Duty twist on them. They polished, not innovated.

Odd Studio Out

I left a studio out: Sledgehammer. I think Sledgehammer proved that they do know how to innovate. Most of the failed Call of Duty games rolled out half-baked ideas and innovations that didn’t change the core gameplay loop. Advanced Warfare took COD in a new direction. It was the most innovative COD game since Call of Duty 4. The problem was that it wasn’t a COD game.

They changed too much, and the game didn’t feel right to COD players. However, this just shows the potential Activision has with Sledgehammer. A lot of the changes in Advanced Warfare went on to be successful elsewhere. The movement system in the game is similar to the movement system in Apex Legends. The future aesthetic is similar to Titanfall or Halo. In a lot of ways, Sledgehammer carries the torch handed off by West and Zampella. They just haven’t been a good fit on Call of Duty.

The Future of Call of Duty

Call of Duty players just want to play Call of Duty. There’s a lot of crossover between COD players and fans of other shooters. However, time has shown that when people buy a COD game, they are looking for something specific. COD finds itself in an interesting position. On the surface, COD would be perfect as a live-service game. Just release a new COD every 3–5 years instead of every year, and just provide new content for the game during that lifecycle. However, that doesn’t fit the business model. Even poor COD launches made millions, so why pass up on selling a new game?

Call of Duty should embrace what it is and market itself like a sports game. Instead of messing around with Vanguards or WWII, they should release yearly versions of the games: Modern Warfare 2022, Black Ops 23. That way, they can keep polishing the games every year and keep adding new content, but not worry about having to reinvent the wheel every single year. People don’t want a new wheel anyway.

Sources: for sales figures.

Steam charts for player data.

Gamerant for COD games ranked by sales.


Six companies used to rule gaming. Only two of them still exist.

Photo Credit: Jason from The Wasteland

Titans in Gaming Part 1: The Old Titans

I found a series of articles in Computer Gaming World from the late eighties talking about the “Titans of Gaming.” They covered what they considered to be the five most important game producers. Of the five, two names may be familiar: Electronic Arts and Activision. And the things that are said about them are telling. On Activision:

“Activision’s forte is raw talent and rampant creativity. For optimum effect, this must be harnessed and channeled, not just sprayed around indiscriminately, and the only way to do so is under pressure from the consumer.” Computer Gaming World 38, June 87 (emphasis added).

On EA:

“Some of EA’s games are based on premises which have already been explored by other companies, but even when this is the case, EA’s distinctive style brings new life to the most over-exploited of ideas.” Computer Gaming World 37, May 1987 (emphasis added).

These do not sound like the companies of today. One does not think of “rampant creativity” when they think of Activision. EA still makes games based on premises that have already been explored, but would you say that they have a distinctive style? I would argue that both companies are natural evolutions of their 1980s versions, and that they outlived their competition because of changes they made during the video game crash.

The video game crash

Gaming went through a rough period in the early eighties. The video game crash happened in 1983, and many companies active at the time didn’t make it. The crash primarily affected console makers and console game studios in the US, but it had ripple effects throughout the rest of the industry. Most of the companies we’ll be looking at today made PC games during the 80s, which did not take as much of a hit as console games. Still, you will see how the turbulence caused a need for change that some could weather, and others could not.

The Casualties


Epyx is one of the failed titans of the gaming industry. They had their heyday during the video game crash. They specialized in action games. One of their most popular games was called California Games. It was a different time. During this era, action games did not dominate the landscape like they do today. Most AAA games today would be considered action games then. During that era, adventure games had a larger foothold in the PC marketplace, and action games did better in the arcade and on consoles. Unfortunately for Epyx, the console market crashed for a few years.

Epyx’s California Games is one of the few Epyx titles still being sold

Epyx failed the way many of the companies in that era did: they failed to adapt. They made most of their money on the Commodore 64, and they refused to make games for Nintendo systems because they didn’t want to pay the licensing fees. They ended up in a situation where they wasted money on old systems, and when they did move forward, they made a deal to make games for the Atari Lynx. They ended up being overextended, and they had to file for bankruptcy. They sold off their properties piecemeal and have left virtually no footprint on the gaming landscape.


Infocom specialized in a type of game that doesn’t exist anymore: text-based adventures. They created Zork, which you may have heard of, and many others you haven’t. They relied on text-based games for their entire lifespan. They dominated in the PC market in the 70s. They survived the video game crash initially because they didn’t rely on the console market. In the 80s, though, their limited range started to hurt them. Graphical games were growing in popularity. Infocom’s solution? Marketing. They argued that graphics were overrated compared to the power of human imagination. For a while, people believed them.

Infocom’s campaign against graphics in games.

Their lack of range hurt Infocom, but their non-game efforts forced them to sell the company. Infocom tried to expand into business software while continuing in games. This meant that they dropped a huge amount of money into a database product called Cornerstone. Unfortunately, Cornerstone flopped, and they didn’t have enough money to keep the doors open long-term. They laid off half of their staff and sold the company to Activision. Activision would close the studio entirely not long after. Overall, the value of the Infocom library was not as high as it could have been. Zork remained a valuable property for years, but the rest of their adventures faded into obscurity. Their inability to adapt to the graphical era ended up ruining the brand in the long run.


MicroProse formed right before the video game crash. They specialized in PC games, though, so the crash didn’t force them out of business right away. They made simulation games and started a number of game series that are ongoing today. Sid Meier was one of three founders. MicroProse released XCOM and Civilization.

MicroProse’s most successful franchise, Civilization, outlived the company by decades.

Of the three dead titans, MicroProse’s library would transfer the best to modern day. They made many vehicle simulation games, like Solo Flight and F-15 Strike Eagle. Vehicle simulation games thrive now as a niche market. Their other specialization, strategy games, has grown into a larger niche now than it was at the time. XCOM and Civilization, two strategy games MicroProse invented, have both gone on to be top games in the strategy genre. In fact, both titles are produced by Firaxis Games, a studio formed by former MicroProse members.

Ultimately, MicroProse fell prey to two major problems. The first is talent drain, a common theme among all these fallen companies. The most well-known man to leave MicroProse was Sid Meier. However, when Sid Meier left, other talented leaders and developers went with him. The other area where MicroProse failed was diversification. Their niches weren’t large enough in the 80s and 90s to support their business, so they looked for other options. Bill Stealey, one of the founders, insisted on investing in arcade games. Sid Meier disagreed and ended up leaving over that disagreement. MicroProse’s arcade games failed, and they went public to pay back the debts they had accrued in the arcade business. They limped along for a few decades as the team shrank and they produced fewer and fewer games. Firaxis, the company formed from former MicroProse talent, is alive and thriving to this day.


Atari wasn’t mentioned in the Titans of Gaming series, but it definitely fit the bill. Atari used to be synonymous with console gaming, and they also played a core role in the video game crash. They made the most successful gaming console of their generation: the Atari 2600. They popularized the gaming console. They started out as a game studio, making arcade ports and a few original games. Nolan Bushnell, Atari’s founder, wanted to build a console, but knew the company couldn’t afford to bring it to market. So, he sold the company to Warner Communications. The 2600 succeeded, but Bushnell and Warner had a falling out, so Warner let him go. Just five years later, they became the main cause of the video game crash.

The Atari 2600 was synonymous with gaming for years.

They failed in many ways, but ultimately, they overextended and produced more games than they could sell. They produced a poor port of Pac-Man for the 2600 and printed more copies of it than there were 2600 consoles in homes, expecting the game to sell the console. They failed to sell all those copies and had to eat the manufacturing costs of all those cartridges. Then they made what is widely cited as the worst game of all time: E.T. Atari rushed E.T. out the door with six weeks of development time, and they failed to sell many of the two million copies they shipped.

All these things come back to a company that had become bloated, and who thought they could sell video games regardless of their quality. When the games market flooded, they had too many failed titles and excess stock to stay afloat. Warner Communications sold the company, and it looked like the end of gaming.

Fall of the Titans

Each of those companies was a titan in their heyday. Some still exist as brand names, like Atari and MicroProse, without any of the people or properties they used to have. Others are gone completely. What do they have in common?

1. Failure to adapt to a shifting market: Although Epyx, Atari, MicroProse and Infocom all made different types of games and consoles, they all fell apart within ten years of each other. The 80s and 90s were a period of rapid change in gaming, and these companies didn’t keep up.

2. Loss of original leadership: Most of these companies lost prominent members around this era. Atari lost its founder and MicroProse lost Sid Meier. The talent drain crippled their ability to adapt.

Now, let’s look at some of the survivors.

The survivors

Gaming had a lot of growing left to do, and it bounced back stronger after the video game crash. However, most of the existing market was replaced. Most, but not all. I want to highlight some of the survivors of the video game crash and show why they are still winners today.


Activision is an old game company. They were, in fact, the first company to make third-party console games. Former Atari employees started the company in 1979. They made Atari 2600 games. And just a few years after they opened, the console market crashed. They pivoted a few times, first to PC games, then to PC software. They even acquired Infocom. None of it worked. However, a young businessman saw potential in the company, and you might recognize his name: Bobby Kotick.

Bobby Kotick has owned Activision for over 30 years 

Kotick saw the value of Activision for its brand name, so he bought it. Kotick led the company through bankruptcy and moved them back to basics. He focused on IP. He went through Activision’s back catalog and republished what he could, and he released sequels to things that worked, like Zork. Business trended back up.

They used their success to go on an acquisition spree, buying many well-known companies, some of which they still hold. Raven Software, Infinity Ward and Treyarch are all acquisitions Activision made in the late 90s and early 2000s, and all three now work on Call of Duty. They expanded across genres and platforms, succeeding as a business first and a producer of games second.

Bobby Kotick saved Activision, and he is still at the helm. In many ways, they are still the same company. They use their IP and back-catalog on a level rivaled only by Nintendo. They do not hesitate to gut companies under them when they underperform. They are not the company they were pre-Kotick, known for “raw talent and rampant creativity.” Often, raw talent leaves Activision to form their own studios. However, they are a true titan of gaming.


EA was a company formed on the eve of the video game crash. They started in 1982, and they focused on home computer games, not console games. Unlike Activision, or Infocom, or Epyx, EA was not formed by a passionate group of game developers. It was the brainchild of Trip Hawkins, a former Marketing Director at Apple. He saw the business of games as being profitable, and so he formed a game publisher, not a developer. The EA formed then was not too dissimilar from the one today.

Sports games have been a cornerstone of EA’s business since the 80s

EA got its big break making sports games. To this day, sports games make up a large portion of their revenue. Sports games were competitive at the time, and EA gained an advantage through smart partnerships. First, they partnered with NBA players Dr. J and Larry Bird to make a one-on-one basketball game. The game succeeded, but not on the level of their next major sports title, Madden. They partnered with John Madden and worked closely with him to hone the gameplay. In both cases, EA used its partnerships to improve gameplay and to advertise the game. EA’s success throughout the video game crash came down to smart business decisions. They were ahead of their time, behaving more like a big tech company than an old-school game developer.


Nintendo wasn’t listed in the Titans of Gaming series either, but they would definitely qualify. I won’t go into much detail about them here, because I already wrote another article on their president during this era. They succeeded by making an affordable console and by holding strong game IP for decades, and their president had a strong hand in both.

The new titans

How would the titans in gaming list look today? Well, we already know two of the names. To keep with the spirit of the CGW list, we will ignore the console makers. These are the five most important game publishers today.

1. Tencent

2. NetEase

3. EA

4. Activision

5. Bandai Namco

Soon, I’ll cover what makes these the new titans in gaming.


The Ultimate History Of Video Games Revisited

Computer Gaming World Issues 36–41

CGW Issue 53 (on Activision)


The real story behind the Activision-Blizzard acquisition drama

Sony has a lot to fear from the Activision-Blizzard acquisition, and it has little to do with Call of Duty

A business move has dominated gaming news for the last month. Not new game announcements, or a new console, or tech or a service. We’ve been caught up in the drama around a business deal and the fallout from it. We’ve been caught up in a new saga of Xbox vs. PlayStation, with Activision-Blizzard as the newest wrinkle. Microsoft offered almost $70 billion to buy Activision-Blizzard in January of 2022. Activision-Blizzard is made up of three game publishers: Activision, Blizzard and King. Activision has been one of the biggest and most consistent game publishers for over thirty years. They own Call of Duty, the biggest console-gaming franchise in the world. King dominates on mobile: you may have heard of Candy Crush, which King produces, and which has been a top-ten grossing mobile game for years running. Blizzard rounds out the three. Blizzard has had a rough few years, but they still own World of Warcraft, Diablo and Overwatch. Overall, Activision-Blizzard owns the most valuable third-party game catalog in the world. Microsoft wants to own that.

Microsoft may have jumped the gun a little bit on their promo graphics.

Activision-Blizzard accepted Microsoft’s offer. Microsoft announced the deal with fanfare and talked about all the things they would do with the Activision-Blizzard catalog. Not everyone was pleased.

Government Intervention

The deal has not gone off as smoothly as Microsoft had hoped. Microsoft is a global company, and so they are subject to scrutiny in many countries. Some countries, like Brazil, have already cleared the deal. However, others are holding it up. Most of the news around the acquisition right now centers on the Competition and Markets Authority (CMA) in the United Kingdom. The CMA won’t approve the acquisition until they finish their investigation. Microsoft is taking the CMA seriously. The CMA recently required Meta to sell Giphy, a GIF-sharing website it owned, reasoning that Meta could restrict access to GIFs from other social media sites and gain an unfair competitive advantage. The CMA holds the power to cancel the Activision-Blizzard acquisition as well.

The CMA building in London

The CMA has called a lot of things into question regarding Microsoft’s and Sony’s positions in gaming. Why Sony? Sony has been talking openly to the CMA about the harm they believe Microsoft would inflict on the industry. They brought up the power of Call of Duty, calling it an “essential game”, and the CMA entertained that argument. Microsoft countered by bringing up Nintendo’s success: the Nintendo Switch is doing great without Call of Duty on the platform. Then Sony pivoted to the danger of Microsoft making Activision-Blizzard games exclusive. They started with console exclusivity, and then pivoted again to the dangers of Microsoft putting Call of Duty on Game Pass. Microsoft CEO Satya Nadella argued that Xbox is in third place in the console wars and that Sony, in his estimation, is winning. “So if this is about competition, let us have competition,” Nadella said.

All of this can come off as childish. Maybe Sony is being a sore loser, or maybe Microsoft is deliberately misrepresenting its strength as a company to get the acquisition through. But it runs deeper than that: the arguments reveal a lot about Microsoft’s and Sony’s business models and their views on gaming.

Microsoft’s Long-Term Strategy with Xbox

Xbox is prioritizing the cloud. Photo by Muha Ajjan on Unsplash

Xbox struggled during the Xbox One era. The Xbox One underperformed in sales, losing to both the PlayStation 4 and the Nintendo Switch. Microsoft had to cancel or delay big exclusives like Scalebound and Halo Infinite, and the exclusives that did come out didn’t compete well with PlayStation’s. With all this in mind, Microsoft decided to shift their focus. They stopped trying to win on console sales and instead focused on building the Xbox brand outside of consoles.

They created Game Pass, a game subscription service similar to Netflix. They moved in 2018 to put all their first-party games on Game Pass day one. At the time, you could buy a year’s subscription to Game Pass for $60. For the price of one game, you could have access to every new Xbox release and a library of other games, too. They launched Game Pass on PC the next year, at the same introductory price point. Microsoft decided to make a long-term investment. They pivoted away from the traditional console strategy. 

Nintendo sells their first-party games for full price years after release. Mario Kart 8 Deluxe has been a top-10 boxed-game seller since its release in 2017, and it hasn’t seen a price decrease. PlayStation games don’t hold their value quite that well, but as we’ll discuss soon, Sony still relies on first-party AAA game sales at $70 apiece. Sony has released several first-party games among the top-ten grossing titles of the year; Microsoft has released zero. Microsoft still sells their first-party games standalone, but they have shown that they don’t plan to win in gaming the same way as Sony or Nintendo.

Microsoft is trying to play the long game on the console wars. The Xbox Series S has the worst specs of current generation consoles, but it runs Game Pass games just fine. With cloud streaming, Microsoft could also extend the life of the Series S beyond when its hardware falls too far behind. They sell both the Series X and the Series S at a loss, but the Series S is sold at a higher loss. With chip shortages, both Microsoft and Sony have a hard time producing enough premium consoles to meet demand. The Series S gives Microsoft another way to push their platform even when they face supply constraints. 

Xbox as a platform makes most of its revenue digitally. AAA game sales don’t matter as much to Microsoft as they do to Sony; subscription revenue does. The Brazilian investigation of the acquisition forced Microsoft and Sony to report how much they make from subscription services. Microsoft made approximately 20% of their gaming revenue on Game Pass alone. The exact number for PlayStation Plus hasn’t been released publicly, but we know that it is lower. These numbers only take console data into account, so Game Pass is outperforming PlayStation Plus by even more once you factor in PC.

In this light, the Activision-Blizzard acquisition is just another step in a long-term plan. Microsoft could put Activision-Blizzard games on Game Pass and get their subscription revenues up. They could also sell the games on other platforms, like they already do with Bethesda. Over time, the percentage of their revenue that they make selling Xbox consoles and games will drop, and they will make more money from subscriptions and sales on other platforms.

Sony’s strategy with PlayStation

Photo by Kerde Severin on Unsplash

Sony’s strategy with PlayStation is closer to the traditional console strategy. They make more money than Microsoft on console sales, and they make more money on AAA game sales. They also make more money on boxed game sales than Microsoft. They have higher revenue and profit from gaming right now than Microsoft does. 

For these reasons, Sony wants to maintain the status quo in gaming. As long as consumers are buying their AAA games for $70, multiple times a year, Sony will maintain and build on their lead. They have more first- and second-party AAA IP, and these games have sold better than Xbox exclusives. They lose these advantages if they have to follow Microsoft. They don’t have enough IP, or the cash flow to buy IP, to make PlayStation Plus compete with Game Pass on value. If they started releasing games on PC the same day as on PlayStation, they would cannibalize their console sales and their full-price AAA game sales. They cannot compete on infrastructure, either, so cloud gaming and subscription services are not as viable for them.

The Sticking Point: Game Subscriptions and Cloud Gaming

Photo by C Dustin on Unsplash

All of this comes back to cloud gaming and subscription services. The CMA agrees: the two points of competition they keep coming back to are cloud gaming and subscription services. Here’s how they laid it out in their issues statement:

“The Merger gave rise to a realistic prospect of an SLC [substantial lessening of competition] as a result of vertical effects arising from:

A: Microsoft withholding or degrading Activision’s content — including popular games such as CoD [Call of Duty] — from other consoles or multi-game subscription services; and

B: Microsoft leveraging its broader ecosystem together with Activision’s game catalogue to strengthen network effects, raise barriers to entry and ultimately foreclose rivals in cloud gaming services.”

They worry that Microsoft could leverage Activision content to force other multi-game subscriptions and cloud gaming services out of business.

Microsoft issued a rebuttal arguing against both points. On cloud gaming, they argued that the technology is still new, and that their innovation advances cloud gaming in general. On multi-game subscriptions, they argued that “Multi-game subscriptions are a means of payment — not a market.” The full arguments are too extensive to quote here.

Whether subscriptions are an innovation or just a different means of payment, they are changing the gaming landscape as we know it. Microsoft is hitching its future in gaming to Game Pass. Sony sees that as a threat, and so they are trying to convince the CMA to shut the acquisition down. If the deal goes through, we will see whether Sony adapts to the subscription landscape, or whether the gaming ecosystem becomes more stratified. We could see a world where Xbox, PlayStation and Nintendo do not directly compete with one another at all.


Nintendo’s Godfather: Winners in Gaming 2

“I tell people that ‘entertainment is valuable when it is different from other entertainment,’ and these are Yamauchi’s words. It was Yamauchi who laid the foundation of our universal way of thinking and the foundation of Nintendo today.” — Current Nintendo president Shuntaro Furukawa

Hiroshi Yamauchi, Third President of Nintendo

Nintendo has only had three presidents since it became a gaming company. They’ve been in gaming for about 80 years, and one man sat at the helm for more than 50 of them. That man was Hiroshi Yamauchi, the godfather of Nintendo. He didn’t follow the typical path, but he exemplifies winning in gaming. Most of the influential figures in gaming history are passionate creators: you have the John Carmacks, the Shigeru Miyamotos, even the Vince Zampellas. Yamauchi does not fit this mold. He was not an engineer, and he knew nothing about making games. He didn’t even play video games; he preferred Go. He was, however, one of the biggest winners gaming has ever seen.

Yamauchi brought Nintendo into gaming. When he started, they made playing cards. He shepherded them from playing cards into video games and all the way through the GameCube era. He led the company for over five decades, which blows other industry leaders out of the water. He often worked behind the scenes, but his impact is easy to see. Nintendo nearly went out of business multiple times early in his tenure, but he crafted them into one of the biggest juggernauts in gaming and the largest gaming company in Japan. He made Nintendo what it is today.

This is the second post in the Winners in Gaming series. If you enjoy it, check out the first one, on Phil Spencer.

He learned to deal with adversity early

Yamauchi’s young life was rough. His father left him and his mother when he was young. His mother gave him up to her parents. His grandfather, Sekiryo Kaneda, owned Nintendo. Yamauchi had a strict upbringing: he went to prep school, and then law school. He didn’t even get to finish law school. His grandfather asked him to take over Nintendo when he was only 21. This sort of upbringing sets him apart from others at his level. He didn’t have a technical education, he didn’t grow up playing games, and he didn’t have a passion for tech. He took charge of Nintendo because of his grandfather’s failing health.

Yamauchi was a hard man

Yamauchi had one condition for taking over Nintendo: his grandfather had to fire every other family member who worked there. He got his own cousin fired and took over the company. Immediately, the factory workers went on strike. Yamauchi fired them all. He even set up multiple R&D departments that directly competed against one another. He was a hard man.

These things are at odds with how we see Nintendo from the outside. They made playing cards, then toys, and finally video games. Their mascot is Mario, who always smiles. They appeal to all ages. Yet, Yamauchi set all of these things in place. He was the one who moved Nintendo into toys and into gaming, and he insisted that they be a family-friendly company.

Yamauchi had a hard upbringing, and he saw Nintendo as a business, not a passion project. In many ways, he resembles a similar figure: Walt Disney. Disney was seen by many as tyrannical, and he also insisted on his company being family-oriented. Yamauchi put success, and the company, first. In his mind, making video games was simply the best way to achieve that.

He made some choices people wouldn’t agree with today. He was considered a tyrant. He worked his team hard, and he even set his teams in competition with one another, with two R&D departments competing for funding. That work environment sounds difficult, maybe even toxic. However, his employees reported that he was a good boss. In this, he resembles Steve Jobs, Bill Gates and other tech giants: he had high expectations and saw markets well ahead of their time.

He created the NES

Photo by Jason Leung on Unsplash

Yamauchi was ahead of the curve on game consoles. Nintendo didn’t release the first gaming console, but Yamauchi’s approach was unique. He wanted something other companies couldn’t copy for at least a year, but at the same time something so cheap almost everyone could buy it. He wanted to make a console that they could sell reliably for years.

He saw the hardware as a means to an end, and that set the tone for Nintendo from their beginnings as a game company until today. His vision for the NES was to make a cheap system that was easy to program for. In a lot of ways, he resembles an early Bill Gates. He was more interested in getting Nintendo consoles with games into people’s homes than with turning a large profit on a per-console basis.

He focused on games, not hardware

Yamauchi focused on games first, setting the tone for Nintendo as a company. He personally approved every game that Nintendo released for years. He didn’t even play games, but he had a great idea of what would sell in the market. He picked out the people who would make them, too. He hired Shigeru Miyamoto. He gave Miyamoto a chance on Donkey Kong, Miyamoto’s pet project. Others saw Miyamoto as a dreamer without business sense, but Yamauchi saw the talent in him. Donkey Kong sold great, and Miyamoto would go on to create Super Mario Bros., Zelda and other greats.

Photo by Cláudio Luiz Castro on Unsplash

Yamauchi valued his developers as creators, not for their technical skills. From the book Game Over by David Sheff: “Nintendo would, Yamauchi decided, become a haven for video-game artists, for it was artists, not technicians, who made great games.” Even though he made the final decision on whether to ship a game, he still valued talent, and he trusted his team enough to have them make innovative games.

He left the company on good terms

Yamauchi held the office of president at Nintendo for 53 years, though not entirely by design. He wanted to step down around 1996, but didn’t until 2002 because he failed to find a good successor. That’s not surprising: he kept such a close hand on the company that he personally approved each game. He didn’t know if the company could handle itself without him.

After stepping down, he stayed on the board for a number of years, until Satoru Iwata became the CEO. At that point, he felt the company was in good hands. He didn’t even draw a pension, despite being worth millions, because he felt his equity in the company was enough. He wanted the money to be put to good use inside Nintendo. It says a lot about his character: he cared more about Nintendo as a company than most people would.

Yamauchi built the most dominant gaming company in the world, and he did it by following his own vision. He took risks, and under his leadership Nintendo almost went out of business many times. But he presided over the rise of Nintendo and personally picked many of the company’s current leaders. He is undoubtedly one of the greatest winners in gaming.




The Horizon Zero Dawn Remaster Makes Sense (to Sony at least)

A document leaked a couple of days ago that covered upcoming PlayStation releases. It listed a remaster of Horizon Zero Dawn. Sony has gotten into the habit of remaking games recently. The Last of Us remake already seemed a little strange given how recent that game was (The Last of Us released in 2013, and they remastered it the first time in 2014). Horizon Zero Dawn came out in 2017, and it already got an upgrade on PS5. Why does it need a remaster? Because Sony is using it for cross-platform promotion.

The Last of Us remake came out right before they released the first trailer for the Last of Us show. They released the Uncharted: Legacy of Thieves collection one month before the Uncharted movie launched. Sony is coordinating its releases of games and other media to cross-promote one another. They want a Horizon game close to the release of the show on Netflix, or at least to the promotional blitz that will precede it, and the next full game in the series would miss the show’s launch. They also seem to prioritize releasing the game that covers the same material as the adaptation: for The Last of Us, they remade the first game, which is the one the show adapts, and they did a similar thing with Uncharted.

Other companies have found success with this model. Most recently, Cyberpunk 2077 hit its biggest player count since its launch. They released new content at the same time that the show launched, and their sales and play numbers skyrocketed. This strategy will become more common in the industry as publishers get better at adapting their IP.



Stadia is shutting down

But the tech will live on

Google announced that they are shutting down Stadia at the beginning of next year. This came as a surprise to the game studios still making games for Stadia, and no one else. Stadia has not been doing well for a long time; many considered it dead on arrival. The writing was on the wall as early as 2021, when Google shut down their internal game studios.

Stadia did not fail because of the streaming service itself. The technology works; it is a competent streaming service. I talked about this in my blog post about the cloud gaming wars, but the business model really let Stadia down. Playing games on the service works, but it has a poor catalog, and you have to buy your games individually. It’s positioned to compete with consoles, and so it loses to other cloud gaming services on value and on IP. Why would you buy a game library on Stadia when you could get all of Game Pass for $15 a month? Even Amazon Luna is a better value than Stadia.

Because the tech behind Stadia is good, it still has a future. Google spun off Stadia’s B2B service as Immersive Stream for Games. They licensed the tech out to Capcom to fuel a browser-based Resident Evil: Village demo. They could always sell the technology that way. If any other company wants to get into cloud gaming, like Nintendo, Google could sell them Stadia as a technical solution. They could also just sell smaller, browser-based solutions to companies. Either way, Stadia as a service is dead, but the tech behind it will live on as Immersive Stream for Games.  
