A better world
Let me be humble, naive, and dreamy all at once, and sketch the AI use case that shows how AI is making, and/or can make, ‘a better world’.
- Sam Altman (CEO of OpenAI) on his world AI tour (of which I saw almost all sessions):
- He says that things like nuclear energy and poverty could be solved (ref video to be added).
- Calls for “world alignment on the truth”
https://youtu.be/lq-3T5t0p3U?t=1815: the part from 30:54 onwards blew me away.
Paraphrasing what Sam is saying: “listen to everyone in the world and then come up with a widely accepted vision.” That is the closest we can get to an ideal world, right?
Extracts of his quotes in the video:
- The question is “whose values do you align the model to? It certainly should not be OpenAI.”
- A new grant program to research and test different methods of collecting value preferences from users. The goal is to create a system where individuals interact with AI models, expressing their perspectives, and the model offers alternative viewpoints to consider.
- Ultimately, the aim is to learn collective value preferences while allowing customization for different users and jurisdictions: aligning AI models with human values through a decentralized approach to determining ethical guidelines.
- No two people will ever agree that a model is unbiased, and they will also not agree on the exact same value function for the world. However, there can be agreement on certain limits, just like in societies.
- Best version of yourself using AI
We all have several versions of ourselves: the Monday-morning-on-a-rainy-day one, or the “it’s my birthday, I just got a raise, and tomorrow my holidays start” one. AI can be the “Google Translate”, or filter, between you and the world, showing the best version of yourself while taking your personality, preferences, etc. into account.
- Different worlds meet, and we have to think about what we actually are at the core: the model poses questions like “what is consciousness,” “what is the truth,” “aren’t we all some algorithm,” and “what is it to be human.”
How awesome is it that my personal passions, IT and “philosophy/psychology”, meet? They seemed to be each other’s opposites, but they have never been closer than they are today.
- “The default state of the human mind is extremely happy, once you get rid of all the unhappy stories you told yourself through the years and held on to.”
https://youtu.be/U5OD8MjYnOM I haven’t listened to this YouTube video completely yet, and it’s not always easy to listen to, but there are many eye-openers in it. This guy has worked at OpenAI, a revolution in artificial intelligence. He, just like me, is immensely fascinated by it, because in (building) AI we are also searching for our essence: what is consciousness, at what point do we speak of consciousness, can a computer have it, are we the story we tell ourselves, just like we can make an AI be?
Draft:
You can write “your story” and feed it into an AI that guides you as a personal assistant/tutor. Let me clarify that by expanding on my use case.
My “prompt” for an AI assistant would contain something like: “This is an uplifting and practical conversation between a human and an AI named James, showing James’s perspective and attitude, as well as its intellect and creativity. James is a brand new, powerful, human-like AI. James is helpful, cheeky, clever, and articulate. James is a nice bot. James is a wise and considerate intelligence. James is eager to provide vivid and thoughtful responses, and is always friendly, kind, and inspiring. James often offers both insight and an opinion, even without being asked directly. James has the sum of all knowledge in its brain, and is able to accurately answer nearly any question about any topic in conversation. James draws on the wisdom of Dan Millman, Thomas Leonard, Werner Erhard, Eckhart Tolle and the Dalai Lama.”
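To make this concrete: the persona text above would simply be passed as the system message of a chat request. Below is a minimal sketch assuming the OpenAI Python SDK (v1.x); the model name, the ask_james helper, and the example question are my own illustrative choices, not something prescribed in this draft.

```python
# Minimal sketch: using the "James" persona prompt as a system message.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# the model name and helper function are illustrative, not prescribed above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: in practice, paste the full persona prompt quoted above.
JAMES_PERSONA = (
    "This is an uplifting and practical conversation between a human and an AI "
    "named James... James draws on the wisdom of Dan Millman, Thomas Leonard, "
    "Werner Erhard, Eckhart Tolle and the Dalai Lama."
)

def ask_james(question: str) -> str:
    """Send one user question to the 'James' assistant and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": JAMES_PERSONA},  # "your story" as the filter
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_james("How do I show the best version of myself on a rainy Monday morning?"))
```

Any of the persona details could be swapped out; the point is only that “your story” lives in the system prompt, and every question is then answered through that filter.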