Fundamentally Change Civilization
At 8:45 a.m. on Friday, May 24, 1844, Samuel F.B. Morse sat at his desk in the Supreme Court chamber in the basement of the U.S. Capitol, surrounded by a delegation of U.S. Representatives and Senators, knowing that he was about to fundamentally change civilization. On the desk were a telegraph key, a pad, and a pencil.
Sitting in front of a similar group of government observers at the B&O Railroad depot in Baltimore, about 40 miles away, was Morse’s colleague Alfred Vail. He, too, had a telegraph key, a pad, and a pencil.
At the appointed time, Morse tapped out a message to Vail: “What hath God wrought?” – one of the most profound statements in human history. Vail transcribed it for his observers to witness and immediately sent the same message back to Morse, who likewise transcribed it for his.
Morse’s aim for the day was to prove the commercial viability of telecommunication over distance, and, with the government agreeing to fund further work and construction, he did just that.
So, what hath God wrought?
But Morse was thinking on a much higher plane, and his system achieved one of the greatest revolutions in history: sending a message without physically delivering it. This was the first telecommunication, before email, fax, or telephone. And it changed civilization.
Morse fully understood the weight, scope, and power of what he was attempting. As AI grows more dominant by the day, the question now is: can the same be said of the people running, or trying to regulate, the world of AI? Moreover, can we expect that same sense of responsibility today, when it is clear how much profit there is to be made from AI? Will that potential wealth and power undermine ethics, regulation, and altruism? We have already seen plenty of examples, good and bad, of the enormous capacity for both that AI brings.
I’ve been thinking about this nonstop, and here’s a suggestion. Get AI company leaders, government regulators, and researchers all into one room (OK, that’s too many people for any room on the planet, but bear with me for the symbolism). Lock the door and tell them it will stay locked until they answer three questions satisfactorily.
I. Do you truly realize that you hold the greatest invention in history, one with the potential to advance humanity or destroy it completely? Prove it: what measures will you take to ensure the former?
II. Would you follow Isaac Asimov’s “Three Laws of Robotics,” as set out in his book I, Robot (published, note, in 1950), if everyone else in the AI business followed them as well? If you are more familiar with balance sheets and machine learning than with that seminal work, here are the three laws:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
III. Are you willing to put up compensation as security against violating the two commitments above, which you would be entering into voluntarily and without any expectation of wealth or fame?
Achieve the Impossible
That’s right, dear reader, you don’t have to tell me how (a) Pollyannaish and (b) outlandish this is. My answer comes in three simple maxims I have learned from studying leaders outside of business during my 50-plus years in business.
1. Unless you are prepared to deal with the extreme, you cannot take the small steps needed to get there. (Anne Sullivan, Helen Keller’s teacher)
2. If you can’t handle the abstract, you can’t handle the real. (Albert Einstein)
3. Only those who attempt the absurd will achieve the impossible. (M.C. Escher)
Extreme, abstract, absurd: an apt description of the situation we are in, and one we cannot escape.
So what, exactly, hath God wrought? And who is asking these questions on our behalf?