Case Study

What if you could measure the gap between what brands say and what the market thinks?

By Mike Litman · Strategy Director · London · February 2026

I spent 15 years in advertising watching brands tell one story to the public while the market told an entirely different one. Nike launches a campaign that breaks the internet -- stock dips. A pharma company nobody follows quietly doubles its market cap. The cultural conversation and the business reality almost never align. I always wanted to measure that gap. So I built a tool that does.

sociologyofcapitalism.com tracks 1,208 brands every day. An automated pipeline scores each one on cultural relevance and business performance, then calculates the distance between them. That distance is what I call the Tension Index -- and it reveals things that neither metric shows alone.

The catch? I'm not a developer. I've never written code professionally. I built the entire thing -- data pipeline, scoring system, frontend, deployment, automation -- using Claude Code as my engineering partner. Every line of Python, every HTML template, every GitHub Action. This is the story of how.


Every brand is telling two stories at once.

After 15 years as a Strategy Director at independent and creative agencies, I've sat in hundreds of rooms where the same question comes up: Is this brand actually doing as well as people think? Sometimes the answer is obvious. Usually it isn't.

Cultural relevance and business performance are related, but they're not the same thing. A brand can dominate social feeds and be haemorrhaging money. Another can be invisible to culture and printing cash. The interesting ones -- the ones worth watching -- are the ones where those two stories diverge the most.

That divergence is tension. And tension, in my experience, is where every interesting brand story begins. It's the moment before a pivot, a crisis, a breakthrough, or a reckoning. I wanted to see it in data, not just feel it in briefings.

"The gap between what the culture believes about a brand and what the market believes is where every interesting story lives. I just wanted to see it measured."

I'd been building projects with Claude Code for months -- a culture aggregator, a brand taste scoring tool, a generative art project. Each one taught me what was possible. But this idea kept nagging at me: what if I could build something that actually tracked this tension at scale, with real data, updated automatically? Not a one-off analysis. A living system.


A pipeline that runs while I sleep.

The system has four layers, and each one feeds the next. Twice a day, a GitHub Actions workflow triggers a Python pipeline that pulls data from multiple sources, scores every brand, generates AI-powered analysis, builds static HTML pages, and deploys the whole thing to Netlify. I don't touch it. It just runs.

STEP 01

Data Collection

The pipeline pulls 7-day and 90-day view counts for every brand from the Wikipedia pageview API, scrapes Google News for article volume and headline text, and pulls stock price, market cap, revenue growth, and profit margins from Yahoo Finance for all 245 public companies. Each source tells part of the story.
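The pageview pull can be sketched against the public Wikimedia REST API. This is an illustrative version, not the site's actual code: the `window_views` and `sum_views` helper names, the User-Agent string, and the window lengths are all assumptions.

```python
# Sketch of the Wikipedia pageview collection step, assuming the public
# Wikimedia REST API (wikimedia.org/api/rest_v1). Helper names and the
# User-Agent string are illustrative, not the project's real code.
import json
from datetime import date, timedelta
from urllib.request import Request, urlopen

PAGEVIEWS = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/"
             "per-article/en.wikipedia/all-access/all-agents/"
             "{article}/daily/{start}/{end}")

def sum_views(payload: dict) -> int:
    """Add up the per-day counts in a pageview API response."""
    return sum(item["views"] for item in payload.get("items", []))

def window_views(article: str, days: int) -> int:
    """Total pageviews for `article` over the last `days` days."""
    end = date.today()
    start = end - timedelta(days=days)
    url = PAGEVIEWS.format(article=article,
                           start=start.strftime("%Y%m%d"),
                           end=end.strftime("%Y%m%d"))
    req = Request(url, headers={"User-Agent": "tension-index-pipeline"})
    with urlopen(req, timeout=10) as resp:
        return sum_views(json.load(resp))

# e.g. window_views("Nike,_Inc.", 7) vs window_views("Nike,_Inc.", 90)
# gives the short-term vs long-term attention the scores compare.
```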

STEP 02

AI Sentiment Analysis

Every brand's latest headlines are fed into Claude for sentiment analysis. Not just positive/negative -- the AI identifies narrative themes, detects momentum shifts, and writes a one-line "tension narrative" for each brand. This is the qualitative layer that raw numbers can't capture.
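The shape of that step looks roughly like the sketch below, using the Anthropic Python SDK. The prompt wording, the model name, and the JSON schema are illustrative guesses, not the site's real prompt.

```python
# Sketch of the per-brand sentiment call. Assumes `pip install anthropic`
# and an ANTHROPIC_API_KEY in the environment; the prompt, model name,
# and response schema here are assumptions for illustration.
import json

def build_prompt(brand: str, headlines: list[str]) -> str:
    """Ask for a structured verdict, not just positive/negative."""
    joined = "\n".join(f"- {h}" for h in headlines)
    return (
        f"Here are recent headlines about {brand}:\n{joined}\n\n"
        "Reply with JSON only: "
        '{"sentiment": <-1 to 1>, "themes": [...], '
        '"tension_narrative": "<one line>"}'
    )

def parse_verdict(text: str) -> dict:
    """Pull the JSON object out of the model's reply."""
    start, end = text.find("{"), text.rfind("}") + 1
    return json.loads(text[start:end])

def score_brand(brand: str, headlines: list[str]) -> dict:
    import anthropic  # imported here so the pure helpers work without the SDK
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # model choice is an assumption
        max_tokens=300,
        messages=[{"role": "user",
                   "content": build_prompt(brand, headlines)}],
    )
    return parse_verdict(reply.content[0].text)
```

Keeping the prompt builder and the parser as pure functions is what makes a pipeline like this cheap to test without burning API calls.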

STEP 03

Scoring Engine

Cultural score (0-100) combines Wikipedia momentum, news volume, and AI sentiment. Business score (0-100) combines stock performance, market cap tier, revenue trajectory, and profit margins. Private companies get a proxy business score derived from news signals and Wikipedia presence. The Tension Index is the absolute gap between the two.
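A minimal sketch of that scoring layer, assuming each input has already been normalised to 0-100. The weights here are placeholders, not the live formula; as noted later in the piece, the real weighting has been rewritten several times.

```python
# Minimal sketch of the scoring engine. Inputs are assumed to be
# pre-normalised to 0-100; the weights are illustrative placeholders.
def blend(components: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted 0-100 blend; weights are renormalised to sum to 1."""
    total = sum(weights.values())
    score = sum(components[k] * w for k, w in weights.items()) / total
    return max(0.0, min(100.0, score))

def cultural_score(wiki_momentum: float, news_volume: float,
                   ai_sentiment: float) -> float:
    return blend(
        {"wiki": wiki_momentum, "news": news_volume, "sent": ai_sentiment},
        {"wiki": 0.35, "news": 0.35, "sent": 0.30},  # hypothetical weights
    )

def business_score(stock_perf: float, cap_tier: float,
                   revenue: float, margin: float) -> float:
    return blend(
        {"stock": stock_perf, "cap": cap_tier, "rev": revenue, "mgn": margin},
        {"stock": 0.40, "cap": 0.20, "rev": 0.20, "mgn": 0.20},
    )

def tension_index(cultural: float, business: float) -> float:
    """The Tension Index is the absolute gap between the two stories."""
    return abs(cultural - business)
```

A brand scoring 91 culturally and 62 on business carries a tension of 29; a brand at 50/50 carries none, however average it is.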

STEP 04

Build & Deploy

Python generates 1,208 individual brand pages, sector analysis pages, a weekly digest, a comparison tool, embeddable widgets, a JSON API, an RSS feed, and the main index -- then deploys the full static site to Netlify via API. Around 3,700 files, rebuilt from scratch twice daily.
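The generation half of that step is plain templating: one HTML file per brand, written to a build directory that then goes to Netlify's deploy API. The template, field names, and file layout below are illustrative, not the site's actual markup.

```python
# Sketch of the static build step: one HTML page per brand from an
# f-string template. Template, fields, and layout are illustrative;
# the resulting directory is what gets shipped to Netlify's deploy API.
from pathlib import Path

PAGE = """<!doctype html>
<title>{name} · Tension Index</title>
<h1>{name}</h1>
<p>Cultural {cultural} · Business {business} · Tension {tension}</p>
"""

def render(brand: dict) -> str:
    """Fill the page template from one brand's scored record."""
    return PAGE.format(**brand)

def build_site(brands: list[dict], out: Path) -> int:
    """Write one page per brand; returns the number of pages built."""
    out.mkdir(parents=True, exist_ok=True)
    for b in brands:
        (out / f"{b['slug']}.html").write_text(render(b), encoding="utf-8")
    return len(brands)
```

With no framework in the way, "rebuilt from scratch twice daily" is just this loop run over 1,208 records plus the index, sector, and digest pages.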

The whole pipeline runs in about 8 minutes. The Anthropic API calls (for sentiment analysis across 1,208 brands) cost roughly a dollar a day. Everything else -- hosting, automation, data sources -- is free tier. The total running cost of a brand intelligence platform tracking over a thousand companies is less than my morning hot chocolate.


From 74 brands to 1,208. What broke. What worked.

The first version tracked 74 brands. Mostly the ones I found interesting from my advertising career -- Nike, Tesla, Patagonia, the usual suspects. It worked. The data was clean, the scores made sense, and the tension concept held up. So I pushed it further.

Going from 74 to 1,208 brands broke almost everything. API rate limits hit. The Wikipedia pageview API started throttling. News scraping got unreliable at scale. The build time ballooned. Individual brand pages meant the deployment payload exploded. Every assumption I'd made at small scale needed rethinking.

The fixes were mostly architectural. Batch processing with retry logic for API calls. Caching layers so a single failed data source doesn't tank the whole run. Proxy business scores for the 963 private companies that don't have stock tickers -- derived from normalised news volume, sentiment trends, and Wikipedia momentum as a stand-in for public market data. Incremental deployment that only uploads changed files instead of the entire site.
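The retry-and-cache hardening can be sketched in a few lines. The function names, attempt counts, and cache shape are illustrative, not lifted from the pipeline.

```python
# Sketch of the batch-fetch hardening described above: exponential
# backoff on flaky sources, and a cache fallback so one dead API
# doesn't tank the whole run. Names and defaults are illustrative.
import time

def with_retry(fetch, attempts: int = 3, base_delay: float = 1.0):
    """Call `fetch()`, retrying with exponential backoff on failure."""
    for i in range(attempts):
        try:
            return fetch()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # 1s, 2s, 4s, ...

_cache: dict[str, object] = {}

def fetch_or_cached(key: str, fetch, **retry_kwargs):
    """Prefer fresh data; fall back to the last good value on failure."""
    try:
        _cache[key] = with_retry(fetch, **retry_kwargs)
    except Exception:
        pass  # keep the stale value, if any
    return _cache.get(key)
```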

Each problem was a conversation with Claude Code. I'd describe what was breaking, it would suggest an approach, I'd push back on parts that didn't make sense to me, and together we'd find something that worked. The rhythm felt more like directing a team than writing code -- which, it turns out, is exactly the skill set I already had.


The system today.

1,208
Brands Tracked
15
Sectors
245
Public Companies
~3,700
Files Deployed

Every one of those brands gets a dedicated page with cultural score, business score, tension index, AI-generated narrative, score history sparkline, archetype classification, and (for public companies) live stock data. The homepage features a real-time tension scatter plot, sector heatmap, daily AI insight, and a movers feed showing which brands are gaining or losing tension.

Features that shipped along the way: sector analysis pages, a weekly digest, a brand comparison tool, embeddable widgets, a public JSON API, and an RSS feed.

All of it generated automatically. All of it rebuilt from live data twice a day. The site doesn't have a CMS. There's no admin panel. The pipeline is the CMS.


Deliberately boring technology.

Pipeline

Python

~2,800 lines. Data collection, scoring, AI analysis, HTML generation. Four scripts that do everything.

AI

Claude (Anthropic)

Sentiment analysis, narrative generation, daily insights. ~$1/day for 1,208 brands.

Frontend

Vanilla HTML/CSS/JS

No framework. No build step. No npm. Static files that just work. Fast everywhere.

Automation

GitHub Actions

Twice-daily cron. Triggers pipeline, commits data, deploys. Zero manual intervention.

Hosting

Netlify

Free tier. Static site deployment via API. Global CDN. No server to manage.

Built With

Claude Code

Every line of code, every architectural decision, every debugging session. My engineering partner.

There's a reason everything here is simple. I'm not a developer optimising for developer experience. I'm a strategist who needs things to work, be readable, and be fixable at 11pm when something breaks. No dependencies to update. No framework migrations. No build pipeline for the build pipeline.

Vanilla HTML means I can open any file, read it, understand it, and fix it. When the site breaks -- and it has, many times -- the debugging process is "read the file, find the problem, fix the file." That's it. I've come to believe that simplicity isn't a compromise. It's a competitive advantage.
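The automation layer is small enough to show almost whole. This is an illustrative workflow sketch, not the repository's actual file; the script name, Python version, and secret names are assumptions.

```yaml
# .github/workflows/pipeline.yml -- illustrative sketch; script and
# secret names are assumptions, not the repo's real configuration.
name: tension-pipeline
on:
  schedule:
    - cron: "0 6,18 * * *"   # twice daily, 06:00 and 18:00 UTC
  workflow_dispatch:          # manual runs for debugging
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python pipeline.py   # collect, score, analyse, build, deploy
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
```

Note that GitHub's cron schedules run on a best-effort basis, which is fine for a system where "twice a day, roughly" is good enough.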


What building taught me that strategy never could.

Fifteen years of strategy work trains you to think in frameworks, identify patterns, and tell stories about why things matter. That's valuable. But it also trains you to stop before the hard part -- the part where you actually make the thing.

Building this project taught me three things I couldn't have learned any other way:

Data is messier than any brief. Real APIs fail. Wikipedia's pageview data has gaps. Stock tickers change. Companies go private. News coverage varies wildly by region. Every data source has opinions baked into its structure. The scoring methodology has been rewritten four times because real data kept revealing assumptions I didn't know I was making.

Scale reveals design. A page that works for 74 brands doesn't work for 1,208. The information architecture had to change. Filters, search, heatmaps, scatter plots -- all of that emerged because the data demanded it at scale. You don't know what you need to build until you've built enough to see the shape of the problem.

Shipping is thinking. The Tension Index concept only became sharp because I had to implement it. The act of writing the scoring formula forced precision that a strategy deck would never require. "Cultural relevance" is vague in a presentation. It's specific when you're deciding whether Wikipedia pageviews should be weighted 30% or 40% of the total score.

"I've been in the room when strategy decks are presented. Now I've been in the room when the data pipeline fails at 3am. The second one teaches you more."

Where this goes from here.

The Tension Index is a proof of concept that works at production scale. But it's still the beginning. The data model can absorb more signals -- social media volume, job postings, patent filings, app store rankings. Each new data source adds a dimension to the tension calculation.

The API is already public. Anyone can pull the full dataset as JSON and build on top of it. I'd love to see analysts, journalists, and brand teams using this as a starting point for their own work. The embed widgets make it trivial to drop a brand's tension score into any article or report.
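Building on the dataset is a few lines of Python. The endpoint path below is a placeholder, not the site's documented URL; check the live API page for the real one.

```python
# Sketch of consuming the public JSON API. The endpoint URL here is a
# placeholder assumption; the site documents the real one.
import json
from urllib.request import urlopen

def fetch_dataset(url: str) -> list[dict]:
    """Pull the full brand dataset as a list of records."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

def top_tension(brands: list[dict], n: int = 10) -> list[dict]:
    """Highest-tension brands first."""
    return sorted(brands, key=lambda b: b.get("tension", 0), reverse=True)[:n]

# e.g. top_tension(fetch_dataset(
#     "https://sociologyofcapitalism.com/api/brands.json"), 5)
```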

Where this gets really interesting is historical. Right now, the system shows daily snapshots. With enough historical data, it becomes possible to identify patterns -- the tension signature before a brand crisis, the cultural leading indicators of a revenue shift, the moment when market sentiment and cultural conversation diverge in a way that predicts something specific.

That's the long game. Not just measuring tension, but making it predictive.


The best strategists don't just think. They make.

I built this because I believe strategy is better when it touches reality. Not the reality of a focus group or a brand tracker report, but the reality of live data, real infrastructure, and systems that have to work every day without human intervention.

I'm a Strategy Director with 15 years in advertising. I've led teams, pitched clients, shaped positioning for brands people actually use. That hasn't changed. What's changed is that I can now also build the tools that make strategy visible, measurable, and ongoing.

The question I keep coming back to is this: what happens when someone who understands brands and culture also has the ability to build products? Not as a side project. Not as a hobby. As a way of working.

This project is one answer. There are others. But they all start from the same place -- a refusal to stop at the deck.

Explore the Data

See the Tension Index in action.
