2 Comments
May 6, 2023·edited May 6, 2023Liked by Paul Baier

This talks beautifully about how the competitive nature of these advancements is pushing progress forward without controls. I have to partially agree that such unregulated advancements can be very dangerous, especially the problem of powerful models falling into the wrong hands. The situation is similar to the development of dynamite. Alfred Nobel created dynamite to help people in building and mining, but others used it to make bombs, cannons, and rockets for war. Nobel wanted his invention to help people; instead, dynamite earned money by hurting people and destroying buildings during wartime. Before a similar situation arises for artificial intelligence, we need to take proper steps.

Another aspect that is rightly articulated is that the model is now far more capable than an average human being, due to its ability to learn from widely available web data and to learn at massive scale by instantly sharing information with other models.

May 6, 2023·edited May 6, 2023Liked by Paul Baier

Yes, a lot of this makes sense, although I do not agree that it is inevitable for AI to adopt evil motives overall. Sure, a terrorist AI will inevitably exist, but the main body of humanity can also build a guardian AI with good motives to protect us, just as we have the FBI today because some people go rogue. (Given the pace of development, let's start working on that now.)
