#society #politic

- [[Artificial intelligence|AI]]-generated content can reach higher quality and safety by “keeping a human in the loop”. Many start-ups just throw AI content at the user, and it is of low quality.
- In my experience, [Codex](https://openai.com/blog/openai-codex/) / [Copilot](https://copilot.github.com/) is only useful for “noob” code, i.e., simple things like boilerplate: shuffle an array, check a palindrome, sort an array, all the easy questions from [https://leetcode.com](https://leetcode.com/). Codex won’t work for complex code (good design, difficult algorithms & logic…).
- AI can generate [high-quality art](https://openai.com/blog/dall-e/), but it is a lot better off working with the human instead of replacing them.
- AI will automate white-collar jobs first, not blue-collar ones: [it’s very hard to interact with the physical world](https://openai.com/blog/solving-rubiks-cube/), but you can easily automate, say, vision or other human senses and repetitive tasks. Overall, today we have many assistants in the “virtual” world rather than replacements (Grammarly, LanguageTool, Google Maps, GitHub Copilot, Ok Google, Siri…). To conclude, I think the “AI will take my job” aspect is the least concerning aspect of AI. What I am more concerned about is the impact of AI on how information is exposed to humans (how, when, what, how much).
- Programmers are going to **code more and more in natural language, human language, in higher-level language**. I bet you can already create a website now by saying “hey AI, I want an e-commerce website with this design X…”
- We need to teach and work with AI; maybe we will have a sort of **AI teacher**, keeping the model aligned with our goals.
- **Prompt engineers** (it’s already a thing): when using GPT-3, the input you give it strongly influences performance. Say I want the model to write a story about “black cats”; giving it the prompt “this is a story about black cats:” will work better than “this is a story about cats:”.
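To make the Copilot point concrete, the “noob” code it handles well is along these lines; a minimal Python sketch of two of the easy tasks mentioned above (palindrome check, array shuffle):

```python
import random


def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards (case-insensitive)."""
    normalized = s.lower()
    return normalized == normalized[::-1]


def shuffle_copy(items: list) -> list:
    """Return a shuffled copy of a list, leaving the original untouched."""
    copy = list(items)
    random.shuffle(copy)
    return copy


print(is_palindrome("Racecar"))  # True
```

Boilerplate like this is exactly where autocomplete-style AI shines: short, self-contained, and seen thousands of times in training data.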
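The prompt-engineering idea above can be sketched as follows; `build_prompt` is a hypothetical helper, and the commented API call is only illustrative:

```python
def build_prompt(topic: str) -> str:
    # The more specific the topic, the closer the completion tends to
    # stay to the intended theme.
    return f"This is a story about {topic}:"


specific = build_prompt("black cats")
generic = build_prompt("cats")

# A completion request seeded with `specific` would typically produce
# text about black cats, while `generic` drifts to cats in general,
# even though the two prompts differ by a single word.
print(specific)
```

The only thing that changed between the two prompts is one word, which is the whole point: performance shifts come from wording, not from the model.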