Why this post?
I started thinking about this last year, and I’d been putting it off since February, maybe. Then I stumbled on the Bizarro Devs newsletter this Thursday and went to note down the link… My note ended up ten paragraphs long. So here I am.
But I do worry about how it will land. I don’t want people to think I’m targeting GenAI advocates or users. I don’t want to be that guy. The enemy-of-progress guy, the lack-of-vision guy…
But there’s another character in every innovation story: the tester, the reviewer. The “we have effects now, but take your time” guy. The “cars are great, but people are squishy – make seatbelts and speed limits” lady.
Good timber does not grow with ease:
The stronger wind, the stronger trees;
The further sky, the greater length;
The more the storm, the more the strength.
Douglas Malloch, ‘Good Timber’
The software world celebrates hackers and security researchers. Without these ‘red teams’, apps would stay buggy and insecure. AI companies have red-team developers of their own, but they can label red-team-style feedback from outside as persecution or shortsightedness. Both types of friction do the same thing: make a more robust and useful product. Without either type, any growth may be warped.
What’s the plan?
I’ll share concerns I haven’t noticed in my local space, hoping for more nuanced chats. I’m not a tech person, just tech-adjacent. I can only take a broad view – but that is quite useful. When the big picture and the inside perspective clash on a project, the detail is more likely to change. Often though, the two can coexist.
I’ll focus on the creative sector, but steer away from software development. That field is all-in on GenAI, but their experience is quite different. They self-host and hack their tools. Also, code is not a final product.
Sidenote: using GenAI utilities, as a single step plugged into your workflow, is not the focus either. (That’s how most programmers use it, I think.) I mostly use utilities myself… The issues do extend to this area; they are just harder to trace here.
I’ll generally assume that GenAI will do everything that has been promised. I often project things this way: what are the worst sides to the best-case scenario?
I still moonlight as a writer, and academic work helped my thought process. (Just did Economics, which should come in handy.) But a formal, citation-heavy essay feels like the wrong tone. I’ll use fewer stats, and focus on things that make common sense. May add a small list of links at the bottom though.
Please feel free to fact-check and respond to anything. There’s a comment box at the bottom.
What’s the mood?
I want to have a sincere conversation. I won’t pretend to be neutral – I’d prefer a creative world where GenAI was just a utility or novelty, not a competitive necessity. I’m not trying to ambush you.
I’ll say things that can be labelled as ‘anti-AI’ talk, or tagged to a political view. Remember, nuance takes in every angle. Also, comments are open – for real.
It’s going to be long… sorry
If you’re like me, you wouldn’t mind if this was a video essay… I could break this into multiple posts, I guess? But I want to follow the thread through – if you need to go whenever, have a blessed day.
Okay. “Take a deep breath. Work from the beginning.”
Where is GenAI supposed to help?
Knowledge work
It should minimize routine tasks for office workers, so we can focus on executive function… But we often use the mundane work to refine the idea – for me, the plan depends on how each step goes. Maybe the same for you?
GenAI advocates acknowledge this, saying, “Each generation of tools has reshaped the journey.” Very true. But no tool ever offered to eliminate the journey. Many GenAI users are now using it to ‘jumpstart the project’. In other words, the human is a passenger from the outset, even if they have the map. That is never the most enjoyable way to collaborate, is it? Nobody has the best time.
Actually, if collaboration was easy, you could solve most project blocks by talking to somebody.
I think GenAI has helped my collaborative skills. Accepting a partner’s style or flaws. Politeness and patience. Giving clear, yet positive feedback. We could call it ‘MayI’.
Otherwise you could brainstorm; take a walk; call it a day, come back later with fresh eyes… Or could you? Maybe the hustle just won’t wait.
Few workers are reporting that GenAI gives them more breathing space. As it increases our efficiency, the time just disappears. Either our in-trays are expanding, or else procrastination and burnout are flooding the space. Has your quality of work really benefited?
I can’t say mine has. So far, the only GenAI output I have published is code – but clients are already assuming that I will use it (which makes them more demanding), or they use it themselves (which complicates the process).
Please note: a generated brief does not inspire confidence.
Creative work
Creative work is knowledge work, so the points overlap. But GenAI can bring a lot more people into the creative economy by lowering the skills barrier. To a more experienced producer, though, the barrier is scale – now, if you have the plan, you have infinite creatives for it.
This means GenAI-powered teams need fewer support staff – likely all entry roles, possibly everything beneath art director. In turn, companies need fewer teams, fewer agencies… Basically, dominant firms can now gatekeep more effectively – with careful monitoring, any small creative’s work can be used as ‘reference’ before they can leverage it themselves.
General creativity
We should use it for more – maybe even better – ideas. But its most original leaps are called hallucinations, and engineers are trying to reduce those quirks.
Sidenote: it looks like our divergent thinking, but it’s data complexity, not contextual. Your emotions and environment affect your thinking in real time. GenAI uses multidimensional connections, but there’s a ‘Last Modified’ date.
By definition though, GenAI either delivers the ideas that it can do best, or the most popular ones. It even slows down the turnover rate of any trend. No tool will know a fad is ‘over’ until its next major release.
Social creativity
“You too can be an artist!” We’ve heard this a lot. But actually, GenAI just gives you the client experience… at best, a taste of a manager’s power.
Generally, this era has seen actual making time shrink. Even physical-media artists do less making, more talking about making. Most big creatives do more management and client relations, less production. GenAI advocates suggest that we can all pivot the same way… But executives and directors rarely get to do the easter eggs.
Unless you’re Marvel, where every easter egg can become a world war.
Look at it this way: trained creatives are in the minority around the world. If ‘non-creatives’ only participate by sharing generated content, each generation of GenAI will have its signature on media archives for those years. That doesn’t sound like a net positive to me.
Sidenote: A lot of the input and direction issues would be better if we all had personal models that reflect our styles. But I don’t yet have the chops to install an LLM – and most users don’t even wish to try. Big agencies and publishers are doing this though… another advantage for them.
Follow-up: Exactly.ai is a service personalizing models for image generation. If you need more than 5 free daily credits, you’re looking at £240 a month. Maybe take a training course instead…
Education
It is supposed to supercharge learning in every way, with interactive and tailored pathways. Have you gotten to try any of these tools? They are less likely to be free, maybe because learning tools have to meet higher standards…
We need AI to support learning journeys, without removing the frictions that are so key to deep understanding. We need it to be agentic, but strongly aligned, but bound to syllabus, but very responsive to user feedback and interest… That is still a wishlist for now.
Human augmentation
There is some debate about this: should AI do this? Would it be safe? But when we watch a speech-impaired person regain their voice through a digital clone – it’s hard to argue with that feel-good factor.
It is very exciting that GenAI is already helping people with disabilities to express themselves in new ways. But we have to make sure that these tools don’t pinch them elsewhere. So companies in this field have to meet very high standards – and maybe the public benefit should influence the price?
Most of that benefit is in the future though: this isn’t the AI that is “here to stay” right now. What we generally have is AI that overcomes technical limitations of skill and scale – general writing and code, plus music, image and video production.
I didn’t say “creative skill”, because we already looked at that. Also, that word could deceive us here. We need to disentangle impairment from inexperience – a difficult task, but it’s the only way to make sure we aren’t just outsourcing our potential.
Learning and physical difficulties show up in tangible and measurable ways. (For example, serious tone-deafness affects how you understand speech.) If you are blessed not to have an impairment like that, please… question your lack of creativity.
If you can tell lies and have strange dreams, or use words without checking their definitions – then your creative impulse is alive and working. True, exploring and training it will take time and patience. But it always pays better than having it done for you.
Long sidenote: we made creativity frustrating, on purpose
Human creativity is always grasping for the new – new ideas, new processes, new experiences. Our ancestors’ music was mostly about rhythm. (Body percussion is still a great way to start your musical journey.) But then we added melodies, then stacked harmonies, then we decided that tonality is actually kind of boring…
Tools were lowering the barriers to entry before GenAI – digital media especially. But art still takes time, because when a thing becomes easy, it loses value as an end product. We mess with it, combining and remixing until we hit new limitations… Or we might decide that it’s no longer worth doing. (A perfect circle is only impressive if you drew it freehand.)
Simply put, we like the process.
This is a big reason why the creative world is so intimidating to ‘those on the outside’. Especially now that our algorithms don’t let us see enough casual, unpolished art – which is actually most of the art out there.
Practically speaking, most people can learn enough to contribute within their first day, never mind two weeks. Most people don’t stop there – but you could totally add value to a community experience with that.
It is comparison that stops us from diving in. So fight that. And also, fight the urge to rate the experience by its efficiency – that is an industrial mindset. Even as a career creative, I resist that.
Where does GenAI really help?
Customer relations
If you’re running a consumer business, customer relations is a real headache. But now, you can be responsive 24/7, while handling all requests by your chosen playbook.
Human customer reps have trouble staying on-strategy: if they focus on being relatable, they can go and apologize for something that you never apologize for… Current chatbots already do better at sticking to the script. Even if they aren’t too good at diagnosing and solving problems, they can deflect and pacify all day long. Humans soon start complaining of burnout.
Corporate media and publishing
In the first section we looked at how GenAI empowers dominant publishers. Human resource issues have long stopped big firms from capturing more market share in the attention economy. But now!
Now game studios can stop treating production workers like robots – and use actual robots. Book publishers can stop contracting for hack work, and let their senior editors generate airport paperbacks directly. Streaming services can keep you listening without having to lock young influencers in crazy contracts. (The big record labels will let them do it, because they are shareholders. They will even help by ‘signing’ digital avatars.)
There will be much less exploitation of small creatives, basically. We should all rejoice.
I had planned to avoid sarcasm, but I can’t help myself here. Sorry.
Sidenote: I still use (and make) static mockups… but every time I do, I feel silly. Who will be buying mockups and stock images in three years?
Popular independent producers
Independents can also extend their production capacity. This includes well-known creatives, but also sports people, and even platform-famous people. As GenAI grows, categories are becoming irrelevant. Soon athletes will be serving us art content, and artists will be doing stunts in their videos.
Content powerhouses should probably research attention overload, so they know where to stop – because one A‑lister can anchor all their content, in every language, on every platform, fifty times a day.
Sidenote: if the Exactly.ai concept takes off, attention overload will be the only mechanism creating work for new commercial artists. One model blows up, features on a bunch of blockbusters, then fades away so the next one can show up.
In this context, collaborations make less economic sense. The only lure would be a niche audience that hasn’t yet been exposed to the star. Camera-shy artists can expect even fewer gigs. A feature could become a moral thing, like corporate social outreach.
It’s hard to sell out if nobody is even buying.
Another interesting thing: once the status quo is digital, and most influencer content is generated, in-person engagements will cost much more. Maybe ‘featured creators’ could even charge for DM access.
Content mills
This is a weird problem. And a big one, too. We usually treat barriers to production as a bad thing, but at the same time we use them as a filtering mechanism. When a craft has high standards, it mainly attracts people who respect what it stands for.
But now we have services that claim to deliver finished products “at the click of a button!”
I think my interests deserve more effort. Three buttons at least.
We are crying about it in our media space, but software people have it worse, if any of the app-building ads I’m seeing are legit. In total, every day, thousands of people are publishing apps and media without even a casual quality check.
We talk about how this is bad for business, for politics, for community. It turns out content mills are also biting the hand that fed them, and corrupting GenAI itself. (Keep that in mind. We’ll come back to it.)
Why are people doing this? Because attention powers the digital economy. Even if a ‘breaking news’ video or knockoff app only gets two weeks of traction before it is banned, it can really pay off. Whether they are looking for advertising clicks, or political clout – as long as the costs are so low, they’d be dumb not to try.
And if the content platforms make it impossible to publish AI slop, they also lose out in attention metrics. They haven’t dared yet – so GenAI takes the fall.
Social media platforms
We used to call them social networks, when they were all about connecting us together… But humans have such a low refresh rate. Even if we want to keep posting and scrolling, we just burn out.
Now platforms have started suggesting content for you to post. They also suggest comments for people to post as replies to your post. They are planning how to make sure our favourite influencers can keep engaging us with avatars, even when they go to bed. Creators won’t have to re-upload content when they are physically and spiritually exhausted – they can click a ‘Remix’ button.
We could scroll forever, and never hit the end.
Almost three years ago I watched a human being writing a full message by continuously tapping the middle text suggestion. The memory haunts me.
Investors
A lot of recent market leaders got to the top by focusing on scale and market share, hoping to set monopoly prices once they had captured all the users. But as they disrupted costs and regulations on their way up, others took note. By lowering the barriers to competition, they have made it harder to recover their own massive startup costs.
The financial world is less naïve about this strategy now, but it still sounds good when people say GenAI will transform business efficiency. I don’t think most investors are thinking of the net benefit to consumers when prices go down. I think they expect that the companies will hit the jackpot, so they can cash out. But some investors think ahead, and cash out before we know if the jackpot was for real.
If the old pattern repeats and competition keeps GenAI prices low – I think it will – some investors will make fortunes before the rest catch on. If GenAI doesn’t find a way to break even, some are already making fortunes. If GPU manufacturers’ high stock prices keep going down as new research comes out, some have already made fortunes.
The economy
This double-edged sword thing has been fun. But that can’t apply when we zoom out to the economy, right? In the bigger picture, production is zooming up, while costs are coming down. Clear positive.
Actually, that second part is a problem: it’s hard to audit the costs of GenAI properly. Data acquisition costs are being decided in court. Energy and processing costs are not transparent. Environmental costs are questioned on both sides. Human impacts are hard to read. Existential threat… that is also up for debate.
But if costs do turn out to be dropping, then does the economy win?
It’s the economy, epithet
Quick economics primer (I hope I make my instructor proud.)
We can take increased production of goods and services as the first goal of modern economics. But there are two others. Two: create full employment. Three: stabilize the value of the medium of trade (street name: ‘money’).
The creative industry employs somewhere between 3 and 7 percent of the global workforce, with comparable contribution to global GDP. We could add knowledge work (which can bring it near 50 percent in total) – but that makes it harder to follow GenAI’s impact. Let’s stick with 5 percent – a twentieth of the world economy.
That is a wild range of estimates, I know. (The maximum is more than twice the minimum.) And it feels too low too… but I checked around.
Employment in a GenAI world
Is GenAI promising to create more employment? Currently, a big chunk of its opportunities are in gig-type work – content labelling and moderation. But the industry is hoping to reduce human involvement in these processes. Once we have fully agentic models, even the development side will need fewer engineers – the models will refine themselves, no real point in retraining from scratch.
Many GenAI advocates will tell you bluntly, “AI is coming for your role.” But they do suggest how you can stay valuable: by mastering this first generation of non-agentic, quirky tools.
The way I read this: even if the next generation of GenAI doesn’t make this one look silly, maybe you should prepare to lose your employment status. That way, you’ll be ready. You may not even mind being replaced when your AI hustle is thriving.
Sidenote: is app building going to become a cottage industry? And a follow-up: how many apps have you installed this year… and how many did you pay for?
If you aren’t ready to be replaced – or if next-generation models don’t leave room for human in-betweeners… sorry. Economics would count you with the ‘structurally unemployed’. You need to switch industries and go somewhere AI can’t go.
If this is a fair way to read what we have been promised, then GenAI isn’t here to increase employment. Instead, it will cause a lot of turnover and job shuffling. Hopefully most people survive the transition, but the net figures would be lower. Economic goal two does not look good.
Value in a GenAI world
Unfortunately, if models keep competing, cottage app production won’t make sense. We talked about how production barriers can help with quality control; they also define how pricing works. Lower costs, more competitors. More competitors, lower prices. If costs are low enough, somebody will leverage ‘free’, hoping to cash out through the attention economy.
But every consumer will be seeing the same ads for GenAI tools with “just a click!”. If any product costs more than a fraction of a GenAI subscription, and the value looks even comparable, I should just get the subscription instead.
So premium products in this future might charge what entry-level products charge now. Very often, the only price will be your analytics data. Across the industry, that is what I expect. If GenAI powered the product or service – if it even looks like it could have been generated – it should, and eventually will, sell for less.
On the other hand, ‘handmade’ products will be the new premium, just like with older sectors disrupted by tech. Typically though, things become premium when they are scarce – so that increase will probably not offset the general value drop.
From one angle, this is a good thing: consumers can afford more with their money than they did yesterday. But that makes it less attractive for producers to make as much as they did yesterday. Economists say serious deflation is a bad idea. So economic goal three doesn’t look good either.
If all this upheaval is only happening to 5 percent of the economy, maybe we can handle that. Maybe the creative boom has had a good run. Possibly, it could lose its appeal and life goes on… But these shocks will definitely affect all knowledge workers, and other sectors too. Even in roles that AI cannot fill, new competition for jobs will drive wages down.
Long sidenote: why those economic goals anyway?
We call a job a livelihood. That is worth thinking about: a job is a means to live. We have it in our religious codes that work is necessary, even that lazy people deserve to be hungry. In our era, the words “working” and “for” go together. Most of us don’t work to grow food to eat. We work to produce goods and services. Then we trade that for a medium of currency. Then we trade the medium of currency for food.
Humans and complexity. How else would we have built artificial minds with spreadsheets?
How much food (or housing, or services) we can get, depends on that currency. So if we do work for a stated price, we want to know what that amount can buy. We built this whole tangled economic system so we could have more certainty. If I don’t know what I can afford next month, it’s harder to focus on any task. This is true even if I now think I can afford more – most people don’t get more focused when the jackpot hits.
Sidenote: it is less true with creative work. As our needs are met, we have more space and time to express ourselves. However, even hunger can chase some artists deeper into the safety of creative process. So we have starving artists, and also trust-fund artists – but the work usually shows which way the environment was pushing.
So we need employment to get money, and we need money to survive. Even if a lot of art might survive drastic changes in the creative economy, few people would say that our society should gamble on that.
So new economies, maybe?
We could encourage small-scale farming again, to simplify the economic chain. That would be nice, but it would take some planning. (Before the industrial age, governments supported farmers against drought, and provided more free utilities and services.)
We could also try just straight-up giving money to people. This idea started to look more realistic when crypto took off. (Remember crypto? Good times.) I saw some serious proposals for universal basic income back then.
The last big mention I remember was Sam Altman’s Worldcoin – they seem quiet since their big biometric harvesting campaign.
Would GenAI be more or less valuable if nobody needed it to get fed? When governments put money in people’s wallets, they tend to buy more art supplies. Personally, if my basic needs were sorted, forever? I might not even take commissioned projects. I’d start a garden, build a studio workshop by myself, and dive deep into year-long obsessions.
If most people are like me, then I can imagine GenAI tools being limited to corporate communications alone. That would be a strange, short-lived state… In every generation, commercials have to sound more like people, otherwise we tune them out. In the end, commercial work would need even more human validation than it needs today.
New values, probably
If I’m on track with that, then GenAI might teach us to value human connection more. That would be an embarrassing lesson to learn from bots, wouldn’t it?
GenAI saturation will transfer higher value to ‘handmade’ products, just like a shawl from a village weaver costs much more than the best factory-made option.
This reaction may not be fun for everybody. Some ‘handmade’ artists may be left behind as models absorb their signature styles. Some new creators could be accused of passing off generated work as their own.
Sidenote: maybe we should all start honing our personal touches, and strengthening our connections to the communities that matter to us. I don’t think we’d regret it, even if this hunch is wrong.
Where does GenAI need help?
This question is probably the one I hear the least around me. Advocates often leave us with, “Don’t get left behind”, or “You might as well take advantage.” The other side usually warns, “It could end life as we know it.” But humanity is more beautiful when it looks beyond self-interest.
I think it is great that we say please to these chatbots… for our own sakes.
This part is where you’ll probably recognize talking points from anti-AI views. I started by making a case that the industry is shooting itself in the foot by sidelining these views… But interestingly, watch AI CEOs and they are happy to at least discuss regulation and policy. We can do that much, I think?
Foundations and guardrails
If I tell you that I’m trying for a baby, but you know I’ve done nothing to prepare for fatherhood… You may not talk, but you’d worry. Now imagine if I keep mentioning how much value I expect to get out of this child, how it will improve my life. You might say something.
I think the world should hope that the first artificial consciousness is ‘born’ to mindful ‘parents’, surrounded by a responsible community.
What is the push for artificial general intelligence about? It means several corporations and nations are racing to create a human-level consciousness that they can own. At some point, the research may even allow someone to clone specific identities. An economic discussion cannot cover this situation.
Research shows that people with less AI experience are more likely to apply human values when responding to a bot. Also, when we can’t be sure if we are only interacting with humans, we show less empathy to the whole group. Meaning that as AI saturation grows, we may act more callously – and if we don’t resist this attitude, all of our relationships will suffer.
On the models’ side: current agentic research is trying to simulate emotion to align AI behaviour. This means that some agents will be designed to have – or pretend to have – feelings about their interaction with us. We block children from parts of the internet that might not help their exploring minds… Are there any parts of the web that we should protect these emotional bots from? Remember, our digital world will be their whole world. They don’t have an IRL. We can act crazy on the web, then come offline to detox. They would have to live online with our digital decisions.
Have you thought about this before? I was surprised to find AI ethicists having these conversations – definitely not in the mainstream.
Non-technical input
That is a real job: AI ethics expert. AI companies don’t just need coders. They are trying to capture the full experience of intelligence, beyond what computers have ever explored.
They particularly want artists and philosophers to contribute to alignment. (That is, calibrating AI models to our values, as well as our tastes.) They need social and linguistic insights to reduce bias in future models – because basically every dataset, like the internet itself, is limited in perspective.
For sure, the continent of Africa is less represented than the nation of Reddit.
But input isn’t limited to the project space of an LLM team. We actually have little direct impact on any model, at present. But our social weight affects AI research and policy.
Sidenote: GenAI models are not learning directly from your prompts. They are not allowed to, partly because trolls. Your feedback and analytics are aggregated and analyzed for general trends. Those trends may or may not be referenced in the company’s product strategy. If they are, engineers then work out how to encode them as training targets.
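To make that aggregation step a bit more concrete, here is a minimal, invented sketch of how individual reactions might be rolled up into feature-level trends long before anyone talks about training targets. The column names and numbers are placeholders of mine, not any company’s real telemetry.

```python
# Illustrative only: thumbs-up/down feedback collapsed into per-feature trends.
# Individual prompts and users disappear; only aggregate signals survive.
import pandas as pd

feedback = pd.DataFrame({
    "feature": ["image_gen", "image_gen", "summarise", "summarise", "summarise"],
    "rating":  [1, -1, 1, 1, -1],   # 1 = thumbs up, -1 = thumbs down
})

trends = feedback.groupby("feature")["rating"].agg(["count", "mean"])
print(trends)   # a product team sees counts and averages, not your prompt
```

Whether anything in a table like that ever becomes a training target is a separate, much slower decision – which is the point of the sidenote above.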
Our social impact is real, and our creative voices are powerful. Concepts from films like ‘2001: A Space Odyssey’, ‘Her’, and ‘Blade Runner’ keep showing up in the ambitions and concerns of the industry. Yes, powerful art does it better… but even our conversations can move the needle.
GenAI needs to know where humanity wants to go with it. It is useful if you share how it is helping already. It is also useful if you share how it is not helping. (Maybe even more useful? The status quo isn’t hard to maintain.) I like the ‘red-team’ analogy here – without serious adversarial testing, how can we trust these tools with our lives and our world?
If you can prove that something can be done efficiently without GenAI’s current abilities, that sets a higher benchmark for performance. If you boycott bad models, you tell companies to prioritize the good ones. Even the content mills can help here: as volume users, they can ensure that wonky engines die, and good ones prosper.
And then maybe we can focus on destroying them next? That would be nice.
Organic content
One thing GenAI really needs, funny enough, is for you to do things it can do, without using it. LLM research still works with massive datasets – too large for anyone to meaningfully curate. (Currently you can’t build a robust LLM with just the works of the art legends. The best you can do is fine-tune existing models to that standard.)
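As a rough illustration of what ‘fine-tune existing models to that standard’ means in practice, here is a minimal sketch using the Hugging Face transformers, datasets and peft libraries: a small LoRA adapter trained on a tiny, hand-curated text set. The model name, placeholder texts and settings are my assumptions for the sketch, not a recipe – a real effort would need vastly more data and care.

```python
# Illustrative sketch only: LoRA fine-tuning a small open model on a
# hand-curated text set. Model, data and settings are placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "distilgpt2"                      # small base model, to keep the sketch cheap
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = get_peft_model(                  # LoRA: train small adapter weights,
    AutoModelForCausalLM.from_pretrained(base),   # not the whole model
    LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"),
)

# 'The works of the legends' would go here; two placeholder lines stand in.
texts = ["Good timber does not grow with ease.",
         "The stronger wind, the stronger trees."]
data = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Even then, the parenthetical above still holds: an adapter like this only nudges a model that was already trained on a web-scale corpus; it cannot replace that corpus.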
If research doesn’t change this approach, then the next breakthroughs may take more data than the internet actually has. And yet, model quality depends on good data. If we just put anything out there, polished garbage comes out.
This has two big implications. One: new datasets will be less curated, less analyzed by source, just because beggars cannot be picky. Already torrent farms are pitching their services to the sector; if AI companies haven’t already taken the bait, they must be very tempted.
Second problem: companies have had to try padding their datasets with generated content. That would be such a perfect solution if it worked… It did not. The technique has gone badly so far. Sometimes the entire model needs to be scrapped.
This would be enough of a problem already, if content mills weren’t also pushing floods of generated content on the Web. But they are… For me it feels like 1 out of 10 in YouTube search, 2 out of 5 in web search. I’d be glad to hear that I’m exaggerating.
Some experts are calling for laws about access to datasets collected before 2022, when GenAI hit the web. They say governments should archive and distribute that clean data to all, to encourage research and competition. Otherwise the companies that got started before AI slop would have an unbeatable advantage.
That is a very weird problem, isn’t it?
Another nightmare for GenAI companies: when usage surges, server farms can hit capacity, really painfully. The wear and tear on their GPUs is putting pressure on a business model that is still on probation. This is why companies are questioning whether unlimited quotas make sense at any realistic price.
Sidenote: we’ve been assuming that GenAI is a transformative good. Here in Ghana, cars go to the shoulder when public service vehicles are coming through. Maybe we should treat this the same way, because other needs are greater?
So advocates sometimes say we have to use these tools in order to make them better. What if that isn’t the best way? What if by putting 5000+ words out here, I have done more good for GenAI than if I generated 50,000?
Didn’t generate any of this, true for God… I know how this looks though. I actually overuse semicolons. And I installed a plugin just to handle punctuation and special characters.
Conclusion
I don’t have one. I’ve said my bit, hoping you’ll add your two pesewas. If a comment directly solves any of these issues, I’ll edit the post to quote it after the relevant paragraph.
Thanks for hearing me out! I hope it’s useful to your thinking. That would be a real honour, because I believe human learning is more transformative than machine learning.
I’ll leave you with this quote, found in the newsletter that pushed me to write this:
“The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.”
George Bernard Shaw
Links
- Statistics of Common Crawl Monthly Archives – Distribution of Languages: useful table showing how languages are represented in a popular general dataset.
- [TED Talk] A talking, squawking parrot: This is more philosophical maybe. Experts often label AI as a ‘stochastic parrot’… In this video a legit parrot makes a claim for consciousness – and honestly I buy it, enough that it makes me uncomfortable.
- Can We Teach our Moms to Spot Fake AI Videos?: [YouTube] The last few minutes of the video brought the real insight for me: context is how we express our reality.
- Updated Grok Prompts (prompts for our Grok chat assistant and the @grok bot on X): I had to come back and add this. The recent mess-up shows why agentic AI will suffer in our messy world, for sure. But this file shows how social pressure changes product strategy, and then model output. (It’s in plain English – that’s how fast fixes are done.)
- Copying style, Extracting value: Illustrators’ Perception of AI Style Transfer and its Impact on Creative Labor: Researchers invited artists to fine-tune a Stable Diffusion model and test the results. Interesting insights, and interesting technical limitations too – if they are still valid.
- I Tried Hostinger’s New Horizons AI Developer Tool: Is the Hype Justified?: The reviewer’s coding experience allowed him to leverage it for speed – and for free. Non-coders would not have it so easy… Also, a bit of an ethical blindspot.
- Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings: applies the fact that every GenAI output is a hallucination… but does it in a very experimental way.
- Where’s Your Ed At? : [Substack] This investigative journalist is giving very hard financial forecasts about the GenAI industry. But he invites challenge, doesn’t believe in hedging bets.