
Nay I

Why this post?

I started thinking about this last year, and I've been putting it off since February, maybe. Then I stumbled on the Bizarro Devs newsletter this Thursday and went to note down the link… My note was ten paragraphs long. So here I am.

But I do worry about how it will land. I don’t want people to think I’m tar­get­ing GenAI advo­cates or users. I don’t want to be that guy. The enemy-of-progress guy, the lack-of-vision guy…

But there’s another char­ac­ter in every inno­va­tion story: the tester, the reviewer. The “we have effects now, but take your time” guy. The “cars are great, but people are squishy – make seat­belts and speed limits” lady.

Good timber does not grow with ease:
The stronger wind, the stronger trees;
The further sky, the greater length;
The more the storm, the more the strength.

Douglas Malloch, ‘Good Timber’

The software world celebrates hackers and researchers. Without these ‘red teams’, apps would be far buggier. AI companies have red-team developers too, but they can still label red-team feedback as persecution or shortsightedness. Both types of friction do the same thing: make a more robust and useful product. Without either type, any growth may be warped.

What’s the plan?

I’ll share con­cerns I haven’t noticed in my local space, hoping for more nuanced chats. I’m not a tech person, just tech-adja­cent. I can only take a broad view – but that is quite useful. When the big pic­ture and the inside per­spec­tive clash on a project, the detail is more likely to change. Often though, the two can coexist.

I’ll focus on the cre­ative sector, but steer away from soft­ware devel­op­ment. That field is all-in on GenAI, but their expe­ri­ence is quite dif­fer­ent. They self-host and hack their tools. Also, code is not a final product. 

Sidenote: using GenAI utilities as a single step plugged into your workflow is not the focus either. (That’s how most programmers use it, I think.) I mostly use utilities myself… The issues do extend to this area; they are just harder to trace here.

I’ll gen­er­ally assume that GenAI will do every­thing that has been promised. I often project things this way: what are the worst sides to the best-case scenario?

I still moonlight as a writer, and academic work helped my thought process. (I just did Economics, which should come in handy.) But a formal, citation-heavy essay feels like the wrong tone. I’ll use fewer stats, and focus on things that make common sense. I may add a small list of links at the bottom, though.

Please feel free to fact-check and respond to any­thing. There’s a com­ment box at the bottom.

What’s the mood?

I want to have a sin­cere con­ver­sa­tion. I won’t pre­tend to be neu­tral – I’d prefer a cre­ative world where GenAI was just a util­ity or nov­elty, not a com­pet­i­tive neces­sity. I’m not trying to ambush you. 

I’ll say things that can be labelled as ‘anti-AI’ talk, or tagged to a polit­i­cal view. Remember, nuance takes in every angle. Also, com­ments are open – for real.

It’s going to be long… sorry

If you’re like me, you wouldn’t mind if this was a video essay… I could break this into multiple posts, I guess? But I want to follow the thread through – if you need to leave at any point, have a blessed day.

Okay. “Take a deep breath. Work from the beginning.”


Where is GenAI sup­posed to help?

Knowledge work

It should min­i­mize rou­tine tasks for office work­ers, so we can focus on exec­u­tive func­tion… But we often use the mun­dane work to refine the idea – for me, the plan depends on how each step goes. Maybe the same for you?

GenAI advo­cates note this by saying, “Each gen­er­a­tion of tools has reshaped the jour­ney.” Very true. But no tool ever offered to elim­i­nate the jour­ney. Many GenAI users are now using it to ‘jump­start the project’. In other words, the human is a pas­sen­ger from the outset, even if they have the map. That is never the most enjoy­able way to col­lab­o­rate, is it? Nobody has the best time.

Actually, if collaboration were easy, you could solve most project blocks by talking to somebody.

I think GenAI has helped my col­lab­o­ra­tive skills. Accepting a part­ner’s style or flaws. Politeness and patience. Giving clear, yet pos­i­tive feed­back. We could call it ‘MayI’.

Otherwise you could brain­storm; take a walk; call it a day, come back later with fresh eyes… Or could you? Maybe the hustle just won’t wait. 

Few work­ers are report­ing that GenAI gives them more breath­ing space. As it increases our effi­ciency, the time just dis­ap­pears. Either our in-trays are expand­ing, or else pro­cras­ti­na­tion and burnout are flood­ing the space. Has your qual­ity of work really benefited?

I can’t say mine has. So far I have only published code from GenAI, but clients are already assuming that I use it (making them more demanding), or they are using it themselves (which complicates the process).

Please note: a gen­er­ated brief does not inspire confidence.

Creative work

Creative work is knowledge work, so the points overlap. But GenAI can bring a lot more people into the creative economy by lowering the skills barrier. To a more experienced producer though, the barrier is scale – now, if you have the plan, you have infinite creatives for it.

This means GenAI-pow­ered teams need fewer sup­port staff – pos­si­bly every­thing beneath art direc­tor, but likely all entry roles. In turn, com­pa­nies need fewer teams, fewer agen­cies… Basically, now dom­i­nant firms can gate­keep more effec­tively – with care­ful mon­i­tor­ing, any small cre­atives’ work can be used as ‘ref­er­ence’ before they can lever­age it themselves.

General cre­ativ­ity

We should use it for more – maybe even better – ideas. But its most orig­i­nal leaps are called hal­lu­ci­na­tions, and engi­neers are trying to reduce those quirks.

Sidenote: it looks like our divergent thinking, but it’s data complexity, not contextual complexity. Your emotions and environment affect your thinking in real time. GenAI uses multidimensional connections, but there’s a ‘Last Modified’ date.

By def­i­n­i­tion though, GenAI either deliv­ers the ideas that it can do best, or the most pop­u­lar ones. It even slows down the turnover rate of any trend. No tool will know a fad is ‘over’ until its next major release.

Social cre­ativ­ity

“You too can be an artist!” We’ve heard this a lot. But actu­ally, GenAI just gives you the client expe­ri­ence… at best, a taste of a man­ager’s power.

Generally, this era has seen actual making time reduce. Even phys­i­cal-media artists do less making, more talk­ing about making. Most big cre­atives do more man­age­ment and client rela­tions, less pro­duc­tion. GenAI advo­cates sug­gest that we can all pivot the same way… But exec­u­tives and direc­tors rarely get to do the easter eggs.

Unless you’re Marvel, where every easter egg can become a world war.

Look at it this way: trained cre­atives are in the minor­ity around the world. If ‘non-cre­atives’ only par­tic­i­pate by shar­ing gen­er­ated con­tent, each gen­er­a­tion of GenAI will have its sig­na­ture on media archives for those years. That does­n’t sound like a net pos­i­tive to me.

Sidenote: a lot of the input and direction issues would be better if we all had personal models that reflect our styles. But I don’t yet have the chops to install an LLM – and most users don’t even wish to try. Big agencies and publishers are doing this, though… another advantage for them.

Follow-up: Exactly.ai is a service personalizing models for image generation. If you need more than 5 free daily credits, you’re looking at £240 a month. Maybe take a training course instead…

Education

It is sup­posed to super­charge learn­ing in every way, with inter­ac­tive and tai­lored path­ways. Have you gotten to try any of these tools? They are less likely to be free, maybe because learn­ing tools have to meet higher standards… 

We need AI to sup­port learn­ing jour­neys, with­out remov­ing the fric­tions that are so key to deep under­stand­ing. We need it to be agen­tic, but strongly aligned, but bound to syl­labus, but very respon­sive to user feed­back and inter­est… That is still a wish­list for now.

Human aug­men­ta­tion

There is some debate about this: should AI do this? Would it be safe? But when we watch a speech-impaired person regain their voice through a digital clone – it’s hard to argue with that feel-good factor.

It is very excit­ing that GenAI is already help­ing people with dis­abil­i­ties to express them­selves in new ways. But we have to make sure that these tools don’t pinch them else­where. So com­pa­nies in this field have to meet very high stan­dards – and maybe the public ben­e­fit should influ­ence the price?

Most of that ben­e­fit is in the future though: this isn’t the AI that is “here to stay” right now. What we gen­er­ally have is AI that over­comes tech­ni­cal lim­i­ta­tions of skill/scale – gen­eral and code writ­ing, and music, image and video production.

I didn’t say “cre­ative skill”, because we already looked at that. Also, that word could deceive us here. We need to dis­en­tan­gle impair­ment from inex­pe­ri­ence – dif­fi­cult task, but it’s the only way to make sure we aren’t just out­sourc­ing our potential.

Learning and physical difficulties show up in tangible and measurable ways. (For example, serious tone-deafness affects how you understand speech.) If you are blessed not to have an impairment like that, please… question that ‘lack of creativity’.

If you can tell lies and have strange dreams, or use words with­out check­ing their def­i­n­i­tions – then your cre­ative impulse is alive and work­ing. True, explor­ing and train­ing it will take time and patience. But it always pays better than having it done for you.

Long sidenote: we made creativity frustrating, on purpose

Human cre­ativ­ity is always grasp­ing for the new – new ideas, new processes, new expe­ri­ences. Our ances­tors’ music was mostly about rhythm. (Body per­cus­sion is still a great way to start your musi­cal jour­ney.) But then we added melodies, then stacked har­monies, then we decided that tonal­ity is actu­ally kind of boring… 

Tools were low­er­ing the bar­ri­ers to entry before GenAI – dig­i­tal media espe­cially. But art still takes time, because when a thing becomes easy, it loses value as an end prod­uct. We mess with it, com­bin­ing and remix­ing until we hit new lim­i­ta­tions… Or we might decide that it’s no longer worth doing. (A per­fect circle is only impres­sive if you drew it freehand.)

Simply put, we like the process.

This is a big reason why the cre­ative world is so intim­i­dat­ing to ‘those on the out­side’. Especially now that our algo­rithms don’t let us see enough casual, unpol­ished art – which is actu­ally most of the art out there. 

Practically speaking, most people can learn enough to contribute within their first two weeks – often even on their first day. Most people don’t stop there – but you could totally add value to a community experience with just that.

It is com­par­i­son that stops us from diving in. So fight that. And also, fight the urge to rate the expe­ri­ence by its effi­ciency – that is an indus­trial mind­set. Even as a career cre­ative, I resist that.

Where does GenAI really help?

Customer rela­tions

If you’re running a consumer business, customer relations is a real headache. But now, you can be responsive 24/7, while handling all requests by your chosen playbook.

Human cus­tomer reps have trou­ble stay­ing on-strat­egy: if they focus on being relat­able, they can go and apol­o­gize for some­thing that you never apol­o­gize for… Current chat­bots already do better at stick­ing to the script. Even if they aren’t too good at diag­nos­ing and solv­ing prob­lems, they can deflect and pacify all day long. Humans soon start com­plain­ing of burnout.

Corporate media and publishing

In the first sec­tion we looked at how GenAI empow­ers dom­i­nant pub­lish­ers. Human resource issues have long stopped big firms from cap­tur­ing more market share in the atten­tion econ­omy. But now!

Now game stu­dios can stop treat­ing pro­duc­tion work­ers like robots – and use actual robots. Book pub­lish­ers can stop con­tract­ing for hack work, and let their senior edi­tors gen­er­ate air­port paper­backs directly. Streaming ser­vices can keep you lis­ten­ing with­out having to lock young influ­encers in crazy con­tracts. (The big record labels will let them do it, because they are share­hold­ers. They will even help by ‘sign­ing’ dig­i­tal avatars.)

There will be much less exploita­tion of small cre­atives, basi­cally. We should all rejoice.

I had planned to avoid sar­casm, but I can’t help myself here. Sorry.

Sidenote: I still use (and make) static mock­ups… but every time I do, I feel silly. Who will be buying mock­ups and stock images in three years? 

Popular inde­pen­dent producers

Independents can also extend their pro­duc­tion capac­ity. This includes well-known cre­atives, but also sports people, and even plat­form-famous people. As GenAI grows, cat­e­gories are becom­ing irrel­e­vant. Soon ath­letes will be serv­ing us art con­tent, and artists will be doing stunts in their videos.

Content pow­er­houses should prob­a­bly research atten­tion over­load, so they know where to stop – because one A‑lister can anchor all their con­tent, in every lan­guage, on every plat­form, fifty times a day.

Sidenote: if the Exactly.ai concept takes off, attention overload will be the only mechanism creating work for new commercial artists. One model blows up, features on a bunch of blockbusters, then fades away so others can show.

In this con­text, col­lab­o­ra­tions make less eco­nomic sense. The only lure would be a niche audi­ence that hasn’t yet been exposed to the star. Camera-shy artists can expect even fewer gigs. A fea­ture could become a moral thing, like cor­po­rate social outreach.

It’s hard to sell out if nobody is even buying.

Another inter­est­ing thing: once the status quo is dig­i­tal, and most influ­encer con­tent is gen­er­ated, in-person engage­ments will cost much more. Maybe ‘fea­tured cre­ators’ could even charge for DM access.

Content mills

This is a weird prob­lem. And a big one, too. We usu­ally look at bar­ri­ers to pro­duc­tion like a bad thing, but at the same time we use them as a fil­ter­ing mech­a­nism. When a craft has high stan­dards, it mainly attracts people who respect what it stands for.

But now we have ser­vices that claim to deliver fin­ished prod­ucts “at the click of a button!” 

I think my inter­ests deserve more effort. Three but­tons at least.

We are crying about it in our media space, but software people have it worse, if any of the app-building ads I’m seeing are legit. In total, every day, thousands of people are publishing apps and media without even a casual quality check.

We talk about how this is bad for busi­ness, for pol­i­tics, for com­mu­nity. It turns out con­tent mills are also biting the hand that fed them, and cor­rupt­ing GenAI itself. (Keep that in mind. We’ll come back to it.)

Why are people doing this? Because atten­tion powers the dig­i­tal econ­omy. Even if a ‘break­ing news’ video or knock­off app only gets two weeks of trac­tion before it is banned, it can really pay off. Whether they are look­ing for adver­tis­ing clicks, or polit­i­cal clout – as long as the costs are so low, they’d be dumb not to try.

And if the con­tent plat­forms make it impos­si­ble to pub­lish AI slop, they also lose out in atten­tion met­rics. They haven’t dared yet – so GenAI takes the fall.

Social media platforms

We used to call them social net­works, when they were all about con­nect­ing us together… But humans have such a low refresh rate. Even if we want to keep post­ing and scrolling, we just burn out.

Now plat­forms have started sug­gest­ing con­tent for you to post. They also sug­gest com­ments for people to post as replies to your post. They are plan­ning how to make sure our favourite influ­encers can keep engag­ing us with avatars, even when they go to bed. Creators won’t have to re-upload con­tent when they are phys­i­cally and spir­i­tu­ally exhausted – they can click a ‘Remix’ button. 

We could scroll for­ever, and never hit the end.

Almost three years ago I watched a human being writ­ing a full mes­sage by con­tin­u­ously tap­ping the middle text sug­ges­tion. The memory haunts me.

Investors

A lot of recent market leaders got to the top by focusing on scale and market share, hoping to set prices at will once they had captured all the users. But as they disrupted costs and regulations on their way up, others took note. By lowering the barriers to competition, they have made it harder to recover their massive startup costs.

The finan­cial world is less naïve about this strat­egy now, but it still sounds good when people say GenAI will trans­form busi­ness effi­ciency. I don’t think most investors are think­ing of the net ben­e­fit to con­sumers when prices go down. I think they expect that the com­pa­nies will hit the jack­pot, so they can cash out. But some investors think ahead, and cash out before we know if the jack­pot was for real.

If the old pattern repeats and competition keeps GenAI prices low – I think it will – some investors will make fortunes before the rest catch on. If GenAI doesn’t find a way to break even, some are already making fortunes. If GPU manufacturers’ sky-high stock prices keep sliding as new research comes out, some have already made fortunes.

The econ­omy

This double-edged sword thing has been fun. But that can’t apply when we zoom out to the econ­omy, right? In the bigger pic­ture, pro­duc­tion is zoom­ing up, while costs are coming down. Clear positive.

Actually, that second part is a prob­lem: it’s hard to audit the costs of GenAI prop­erly. Data acqui­si­tion costs are being decided in court. Energy and pro­cess­ing costs are not trans­par­ent. Environmental costs are ques­tioned on both sides. Human impacts are hard to read. Existential threat… that is also up for debate. 

But if costs do turn out to be drop­ping, then does the econ­omy win?

It’s the econ­omy, epithet

Quick eco­nom­ics primer (I hope I make my instruc­tor proud.)

We can take increased pro­duc­tion of goods and ser­vices as the first goal of modern eco­nom­ics. But there are two others. Two: create full employ­ment. Three: sta­bi­lize the value of the medium of trade (street name: ‘money’).

The cre­ative indus­try employs some­where between 3 and 7 per­cent of the global work­force, with com­pa­ra­ble con­tri­bu­tion to global GDP. We could add knowl­edge work (which can bring it near 50 per­cent in total) – but that makes it harder to follow GenAI’s impact. Let’s stick with 5 per­cent – a twen­ti­eth of the world economy.

That is a wild range of esti­mates, I know. (The max­i­mum is more than twice the min­i­mum.) And it feels too low too… but I checked around. 

Employment in a GenAI world

Is GenAI promis­ing to create more employ­ment? Currently, a big chunk of its oppor­tu­ni­ties are in gig-type work – con­tent labelling and mod­er­a­tion. But the indus­try is hoping to reduce human involve­ment in these processes. Once we have fully agen­tic models, even the devel­op­ment side will need fewer engi­neers – the models will refine them­selves, no real point in retrain­ing from scratch.

Many GenAI advo­cates will tell you bluntly, “AI is coming for your role.” But they do sug­gest how you can stay valu­able: by mas­ter­ing this first gen­er­a­tion of non-agen­tic, quirky tools. 

The way I read this: even if the next gen­er­a­tion of GenAI does­n’t make this one look silly, maybe you should be ready to lose your employ­ment status. That way, you’ll be ready. You may not even mind being replaced when your AI hustle is thriving.

Sidenote: is app build­ing going to become a cot­tage indus­try? And a follow-up: how many apps have you installed this year… and how many did you pay for?

If you aren’t ready to be replaced – or if next-gen­er­a­tion models don’t leave room for human in-between­ers… sorry. Economics would count you with the ‘struc­turally unem­ployed’. You need to switch indus­tries and go some­where AI can’t go.

If this is a fair way to read what we have been promised, then GenAI isn’t here to increase employ­ment. Instead, it will cause a lot of turnover and job shuf­fling. Hopefully most people sur­vive the tran­si­tion, but the net fig­ures would be lower. Economic goal two does not look good.

Value in a GenAI world

Unfortunately, if models keep competing, cottage app production won’t make sense. We talked about how production barriers can help with quality control; they also define how pricing works. Lower costs, more competitors. More competitors, lower prices. If costs are low enough, somebody will leverage ‘free’, hoping to cash out through the attention economy.

But every con­sumer will be seeing the same ads for GenAI tools with “just a click!”. If any prod­uct costs more than a frac­tion of a GenAI sub­scrip­tion, and the value looks even sim­i­lar, I should just get that instead. 

So pre­mium prod­ucts in this future might charge what entry-level prod­ucts charge now. Very often, the only price will be your ana­lyt­ics data. Across the indus­try, that is what I expect. If GenAI pow­ered the prod­uct or ser­vice – if it even looks like it could have been gen­er­ated – it should, and even­tu­ally will, sell for less.

On the other hand, ‘handmade’ products will be the new premium, just like with older sectors disrupted by tech. Typically though, things become premium when they are scarce – so that increase will probably not offset the general value drop.

From one angle, this is a good thing: con­sumers can afford more with their money than they did yes­ter­day. But that makes it less attrac­tive for pro­duc­ers to make as much as they did yes­ter­day. Economists say seri­ous defla­tion is a bad idea. So eco­nomic goal three does­n’t look good either.

If all this upheaval is only happening to 5 percent of the economy, maybe we can handle that. Maybe the creative boom has had a good run. Possibly, it could lose its appeal and life goes on… But these shocks will definitely affect all knowledge workers, and other sectors too. Even in roles that AI cannot fill, new competition for jobs will drive wages down.

Long side­note: why those eco­nomic goals anyway?

We call a job a livelihood. That is worth thinking about: a job is a means to live. We have it in our religious codes that work is necessary, even that lazy people deserve to be hungry. In our era, the words “working” and “for” go together. Most of us don’t work to grow food to eat. We work to produce goods and services. Then we trade that for a medium of currency. Then we trade the medium of currency for food.

Humans and complexity. How else would we have built artificial minds with spreadsheets?

How much food (or hous­ing, or ser­vices) we can get, depends on that cur­rency. So if we do work for a stated price, we want to know what that amount can buy. We built this whole tan­gled eco­nomic system so we could have more cer­tainty. If I don’t know what I can afford next month, it’s harder to focus on any task. This is true even if I now think I can afford more – most people don’t get more focused when the jack­pot hits.

Sidenote: it is less true with cre­ative work. As our needs are met, we have more space and time to express our­selves. However, even hunger can chase some artists deeper into the safety of cre­ative process. So we have starv­ing artists, and also trust-fund artists – but the work usu­ally shows which way the envi­ron­ment was pushing.

So we need employ­ment to get money, and we need money to sur­vive. Even if a lot of art might sur­vive dras­tic changes in the cre­ative econ­omy, few people would say that our soci­ety should gamble on that.

So new economies, maybe?

We could encour­age small-scale farm­ing again, to sim­plify the eco­nomic chain. That would be nice, but it would take some plan­ning. (Before the indus­trial age, gov­ern­ments sup­ported farm­ers against drought, and pro­vided more free util­i­ties and services.)

We could also try just straight-up giving money to people. This idea started to look more real­is­tic when crypto took off. (Remember crypto? Good times.) I saw some seri­ous pro­pos­als for uni­ver­sal basic income back then. 

The last big men­tion I remem­ber was Sam Altman’s Worldcoin – they seem quiet since their big bio­met­ric har­vest­ing campaign.

Would GenAI be more or less valu­able if nobody needed it to get fed? When gov­ern­ments put money in peo­ple’s wal­lets, they tend to buy more art sup­plies. Personally, if my basic needs were sorted, for­ever? I might not even take com­mis­sioned projects. I’d start a garden, build a studio work­shop by myself, and dive deep into year-long obsessions.

If most people are like me, then I can imag­ine GenAI tools being lim­ited to cor­po­rate com­mu­ni­ca­tions alone. That would be a strange, short-lived state… In every gen­er­a­tion, com­mer­cials have to sound more like people, oth­er­wise we tune them out. In the end, com­mer­cial work would need even more human val­i­da­tion than it needs today.

New values, probably

If I’m on track with that, then GenAI might teach us to value human con­nec­tion more. That would be an embar­rass­ing lesson to learn from bots, would­n’t it? 

GenAI sat­u­ra­tion will trans­fer higher value to ‘hand­made’ prod­ucts, just like a shawl from a vil­lage weaver costs much more than the best fac­tory-made option.

This reac­tion may not be fun for every­body. Some ‘hand­made’ artists may be left behind as models absorb their sig­na­ture styles. Some new cre­ators could be accused of pass­ing gen­er­ated work as their own. 

Sidenote: maybe we should all start honing our personal touches, and strengthening our connections to the communities that matter to us. I don’t think we’d regret it, even if this hunch is wrong.

Where does GenAI need help?

This ques­tion is prob­a­bly the one I hear the least around me. Advocates often leave us with, “Don’t get left behind”, or “You might as well take advan­tage.” The other side usu­ally warns, “It could end life as we know it.” But human­ity is more beau­ti­ful when it looks beyond self-interest. 

I think it is great that we say please to these chat­bots… for our own sakes.

This part is where you’ll probably recognize talking points from anti-AI views. I started by making a case that the industry is shooting itself in the foot by sidelining these views… But interestingly, watch AI CEOs and they are happy to at least discuss regulation and policy. We can do that much, I think?

Foundations and guardrails

If I tell you that I’m trying for a baby, but you know I’ve done noth­ing to pre­pare for father­hood… You may not talk, but you’d worry. Now imag­ine if I keep men­tion­ing how much value I expect to get out of this child, how it will improve my life. You might say something.

I think the world should hope that the first arti­fi­cial con­scious­ness is ‘born’ to mind­ful ‘par­ents’, sur­rounded by a respon­si­ble community.

What is the push for arti­fi­cial gen­eral intel­li­gence about? It means sev­eral cor­po­ra­tions and nations are racing to create a human-level con­scious­ness that they can own. At some point, the research may even allow some­one to clone spe­cific iden­ti­ties. An eco­nomic dis­cus­sion cannot cover this situation. 

Research shows that people with less AI experience are more likely to apply human values in response to a bot. Also, when we can’t be sure if we are only interacting with humans, we show less empathy to the whole group. Meaning that as AI saturation grows, we may act more callously – and if we don’t resist this attitude, all of our relationships will suffer.

On the models’ side: cur­rent agen­tic research is trying to sim­u­late emo­tion to align AI behav­iour. This means that some agents will be designed to have – or pre­tend to have – feel­ings about their inter­ac­tion with us. We block chil­dren from parts of the inter­net that might not help their explor­ing minds… Are there any parts of the web that we should pro­tect these emo­tional bots from? Remember, our dig­i­tal world will be their whole world. They don’t have an IRL. We can act crazy on the web, then come offline to detox. They would have to live online with our dig­i­tal decisions.

Have you thought about this before? I was sur­prised to find AI ethi­cists having these con­ver­sa­tions – def­i­nitely not in the mainstream.

Non-tech­ni­cal input

That is a real job: AI ethics expert. AI com­pa­nies don’t just need coders. They are trying to cap­ture the full expe­ri­ence of intel­li­gence, beyond what com­put­ers have ever explored. 

They par­tic­u­larly want artists and philoso­phers to con­tribute to align­ment. (That is, cal­i­brat­ing AI models to our values, as well as our tastes.) They need social and lin­guis­tic insights to reduce bias in future models – because basi­cally every dataset, like the inter­net itself, is lim­ited in perspective.

For sure, the con­ti­nent of Africa is less rep­re­sented than the nation of Reddit.

But input isn’t lim­ited to the project space of an LLM team. We actu­ally have little direct impact on any model, at present. But our social weight affects AI research and policy. 

Sidenote: GenAI models are not learning directly from your prompts. They are not allowed to, partly because trolls. Your feedback and analytics are aggregated and analyzed for general trends. Those trends may or may not be referenced in the company’s product strategy. If they are, engineers then work out how to encode them as training targets.

Our social impact is real, and our creative voices are powerful. Concepts from films like ‘2001: A Space Odyssey’, ‘Her’, and ‘Blade Runner’ keep showing up in the ambitions and concerns of the industry. Yes, powerful art does it better… but even our conversations can move the needle.

GenAI needs to know where human­ity wants to go with it. It is useful if you share how it is help­ing already. It is also useful if you share how it is not help­ing. (Maybe even more useful? The status quo isn’t hard to keep up.) I like the ‘red-team’ anal­ogy here – with­out seri­ous adver­sar­ial test­ing, how can we trust these tools with our lives and our world?

If you can prove that some­thing can be done effi­ciently with­out GenAI’s cur­rent abil­i­ties, that sets a higher bench­mark for per­for­mance. If you boy­cott bad models, you tell com­pa­nies to pri­or­i­tize the good ones. Even the con­tent mills can help here: as volume users, they can ensure that wonky engines die, and good ones prosper. 

And then maybe we can focus on destroy­ing them next? That would be nice.

Organic con­tent

One thing GenAI really needs, funny enough, is for you to do things it can do, with­out using it. LLM research still works with mas­sive datasets – too large for them to mean­ing­fully curate the con­tent. (Currently you can’t build a robust LLM with just the works of art leg­ends. The best you can do is fine-tune exist­ing models to that standard.) 

If research doesn’t change this approach, then the next breakthroughs may take more data than the internet actually has. And yet, a model’s quality depends on good data. If we just put anything out there, polished garbage comes out.

This has two big implications. One: new datasets will be less curated, less analyzed by source, just because beggars cannot be choosers. Already torrent farms are pitching their services to the sector; if companies haven’t already taken the bait, they must be very tempted.

Second problem: companies have had to try padding their datasets with generated content. That would be such a perfect solution if it worked… It did not. The technique has gone terribly so far. Sometimes the entire model needs to be scrapped.

This would be enough of a prob­lem already, if con­tent mills weren’t also push­ing floods of gen­er­ated con­tent on the Web. But they are… For me it feels like 1 out of 10 in YouTube search, 2 out of 5 in web search. I’d be glad to hear that I’m exaggerating.

Some experts are call­ing for laws about access to datasets col­lected before 2022, when GenAI hit the web. They say gov­ern­ments should archive and dis­trib­ute that clean data to all, to encour­age research and com­pe­ti­tion. Otherwise the com­pa­nies that got started before AI slop would have an unbeat­able advantage.

That is a very weird prob­lem, isn’t it?

Another night­mare for GenAI com­pa­nies: when usage surges, server farms can hit capac­ity, really painfully. The wear and tear on their GPUs is putting pres­sure on a busi­ness model that is still on pro­ba­tion. This is why com­pa­nies are ques­tion­ing whether unlim­ited quotas make sense at any real­is­tic price.

Sidenote: we’ve been assum­ing that GenAI is a trans­for­ma­tive good. Here in Ghana, cars go to the shoul­der when public ser­vice vehi­cles are coming through. Maybe we should treat this the same way, because other needs are greater?

So advo­cates some­times say we have to use these tools in order to make them better. What if that isn’t the best way? What if by putting 5000+ words out here, I have done more good for GenAI than if I gen­er­ated 50,000?

Didn’t gen­er­ate any of this, true for God… I know how this looks though. I actu­ally overuse semi­colons. And I installed a plugin just to handle punc­tu­a­tion and spe­cial characters. 

Conclusion

I don’t have one. I’ve said my bit, hoping you’ll add your two pese­was. If a com­ment directly solves any of these issues, I’ll edit the post to quote it after the rel­e­vant paragraph. 

Thanks for hear­ing me out! I hope it’s useful to your think­ing. That would be a real honour, because I believe human learn­ing is more trans­for­ma­tive than machine learning.

I’ll leave you with this quote, found in the newslet­ter that pushed me to write this:

“The rea­son­able man adapts him­self to the world; the unrea­son­able one per­sists in trying to adapt the world to him­self. Therefore, all progress depends on the unrea­son­able man.”

George Bernard Shaw

