AI Future and alignment problem

The profit motive and fear push even smart humans to do things they know they shouldn’t.

The rules of our economic and political systems are badly designed: the incentives of individuals and of society as a whole are misaligned. This alone is enough to make solving the AI alignment problem look hopeless.

The prisoner’s dilemma in game theory is only a dilemma because of how the rules are set up: the best choice for the individual results in a worse outcome for the population. As long as you can make more money by polluting the environment (and the fine for the damage caused is less than the extra profit), pollution will increase even though it makes everyone’s life worse in the long run. This holds even when it makes the polluter’s own life worse, because they are either legally required to maximize profit for the shareholders, or they are afraid that if they don’t do it, a competitor will, the environment will end up polluted anyway, and they won’t even have made a profit on it. Rules and laws need to change so that this is not the case.

(But unfortunately, at some point in the past companies started making so much money that they could buy politicians to pass laws for them, ensuring they keep making a profit even when it’s bad for the people. And since politicians cannot get elected without money, we cannot change those laws… without getting money… and to get money we need to compete with businesses that make more money when they behave unethically. “Getting money out of politics” could maybe solve this, but how to achieve it is beyond me. At this point I really don’t see how the system could be changed without public hangings of corrupt politicians, to make sure elected officials represent the will of the people and not the wishes of some moneyed interest.)
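The incentive structure above can be sketched as a tiny payoff table. This is a minimal illustration, not from the original text, and the specific payoff numbers are assumptions chosen only to satisfy the standard prisoner’s-dilemma ordering: defecting (polluting) always pays more for the individual, yet mutual defection is worse for everyone than mutual cooperation.

```python
# Illustrative payoffs (row player's payoff listed first; higher = better).
# The numbers are made up but satisfy the prisoner's-dilemma ordering.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both restrain pollution: good for all
    ("cooperate", "defect"):    (0, 5),  # the defector profits at the other's expense
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # both pollute: worse for everyone
}

def best_response(opponent_move):
    """The move that maximizes my own payoff against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Whatever the other player does, defecting pays more for the individual...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection leaves both players worse off than mutual cooperation.
assert PAYOFFS[("defect", "defect")] < PAYOFFS[("cooperate", "cooperate")]
```

Changing the rules (fines larger than the extra profit) amounts to editing the payoff table so that “cooperate” becomes the best response, at which point the dilemma disappears.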

If doing the right thing (for the whole population) benefited the individual more than doing a bad thing, there would be no dilemma: the goals would be aligned and mostly good things would happen. But when there is a conflict of interest between individuals’ goals and the group’s goal, achieving the group’s goal requires cooperation and trust between individuals. And unfortunately, trust doesn’t seem to scale 🙁

We need to figure out a way to fight Moloch

Ex-Google Officer Finally Speaks Out On The Dangers Of AI! – Mo Gawdat
Max Tegmark: The Case for Halting AI Development | Lex Fridman
41:25 Meditations on Moloch
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman
Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman

Game advice

Don’t lose sight of what the goal is when you play a game.
To have fun!

If a game is not (or no longer) fun to play, change it or stop playing it.

DM advice: D&D should be fun for everyone
https://clips.twitch.tv/LittleSullenCheesecakeRlyTho-uSff0C2sNQVfSdVu
https://clips.twitch.tv/ViscousLuckyNarwhalRitzMitz-B9CNh7CcDyJhXq7R
https://clips.twitch.tv/ScarySpikyLettuceSwiftRage-qoJzrfOKSrgA87cQ
https://clips.twitch.tv/TangibleAverageDumplingsBleedPurple-L6Hom7jiSPDIKgKD
https://clips.twitch.tv/EphemeralZealousScorpionStoneLightning-pqdHsiDC9PkTj3L0

Atmel Support

Yesterday I opened a support ticket with a bug report at Atmel, and now their support site is in maintenance mode.

Maintenance Notice

The page you have requested is temporarily not available. Please check back later. We are sorry for any inconvenience.

Coincidence? 😉