Artanthos t1_irsqmo3 wrote

Average office jobs started disappearing in the 1980s with the advent of the PC.

There were huge waves of corporate right-sizing. Things like typing pools don’t really exist anymore.

I expect the next wave to start next year. AI-generated art has reached a point where it can dramatically reduce (but not eliminate) corporate labor requirements for art production.

Another year or two and text generation AIs will start replacing programmers in bulk. It won’t eliminate programmers completely, but the few remaining will have very different jobs.

4

imlaggingsobad t1_iru3cs2 wrote

How do you think the job of a programmer will change? Do you think they'll be more of an ML/AI type of engineer? More data engineering? Less web development, but more complicated computer science stuff?

2

red75prime t1_irzri0t wrote

> Another year or two and text generation AIs will start replacing programmers in bulk.

Nope. In two years AIs will be more useful and will make fewer errors in the code snippets they produce than, say, Copilot does today, but you can't scale a language model enough for it to make sense of even a relatively small codebase well enough to meaningfully contribute to it. For starters, AIs need episodic and working memory before they can replace, or vastly improve the performance of, an average programmer.

The demand for programmers could decrease a bit (or not grow as fast as it could), but no bulk replacement yet.

And no, this isn't "my work is too complex to be automated that fast" (I'm a programmer). Current AIs do lack certain things: memory, explicit world models, long-term planning, online learning, and logical reasoning. I don't find it feasible for those shortcomings to be overcome in a year or two. Maybe in 5-10 years.

2

Artanthos t1_is0v3c7 wrote

>you can't scale a language model enough for it to be able to make sense of even relatively small codebase to meaningfully contribute to

They were saying similar things about text-to-art just last year.

1

red75prime t1_is10a00 wrote

Superficially similar, maybe. There are real technical reasons why you can get a pretty picture out of the existing technology but cannot use the same technology to analyze even a small codebase (say, 10,000 lines of code).

With no working memory other than its input buffer, a transformer model is limited in the amount of information it can attend to.

For pictures that's fine: you can describe a complex scene in a hundred or two words. But for code synthesis that does more than produce a snippet, you need to analyze thousands or even millions of words (most of them get skipped over, but you still need to attend to them, even if briefly).
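To put a rough number on it (a back-of-the-envelope sketch; the tokens-per-line figure and the context size are my own assumptions, not exact numbers for any specific model):

```python
# Rough estimate: does a "small" codebase fit in a transformer's input buffer?
lines_of_code = 10_000      # the "relatively small codebase" from above
tokens_per_line = 10        # assumption: ~10 tokens per line of code
context_window = 4_096      # assumption: a typical input buffer for current code models

codebase_tokens = lines_of_code * tokens_per_line
print(f"codebase: ~{codebase_tokens:,} tokens")                      # ~100,000 tokens
print(f"fits in the input buffer: {codebase_tokens <= context_window}")  # False, ~25x too big
```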

And here the limitations of transformers come into play. You cannot grow the input buffer much, because the required computation grows quadratically with its length (no, not exponentially, but quadratic is enough when you already need to run a supercomputer for months to train the network).
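A minimal illustration of the quadratic part (just the count of attention scores, ignoring everything else in the model):

```python
# In self-attention every token attends to every other token,
# so each layer computes an n x n matrix of attention scores.
def attention_entries(n_tokens: int) -> int:
    return n_tokens * n_tokens

for n in (2_000, 20_000, 200_000):
    print(f"{n:>7,} tokens -> {attention_entries(n):>15,} scores per layer")
# 10x more tokens -> 100x more compute and memory per attention layer
```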

Yes, there are various attempts to overcome that, but it's not yet certain that any of them is the path forward. I'd give maybe a 3% chance of something groundbreaking appearing in the next year.

1