Elon Musk, Open AI, Guardrails … and Aliens | Coffee Talk

By Richard Dolan | August 4, 2023

Hi Everyone, 

Tracey and I recorded this the other day. She had been looking into some recent moves by Elon Musk in the realm of artificial intelligence and we chatted about it over coffee for about 15 minutes. According to what she read, he is looking into creating an AI program with fewer guardrails in place, specifically in the realm of political correctness and so forth. That’s interesting and I suppose we shall see how this all plays out. 

We talked a bit also about Musk’s apparent attitude toward UFOs and aliens. Basically, the guy needs to talk to me. We’ll see if that can happen! 🙂 

Overall it was a fun, casual but hopefully interesting discussion. I hope you enjoy. 

Richard 

20 thoughts on “Elon Musk, Open AI, Guardrails … and Aliens | Coffee Talk”

  1. MarkH

    Hi Richard and Tracey,
    I remember they unleashed two AIs on each other years ago and one eventually did away with the other. I’m already feeling sorry for all those AIs that go up against Elon’s version; they don’t have a hope in hell. I remember the RAND Corp published a white paper on what could be a national security threat, and UFOs were far down the list. Now they have published: https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2400/RRA2475-1/RAND_RRA2475-1.pdf
    Perhaps AI may do a better job of predicting what’s going to be a threat to national security if it’s given all the information; otherwise, just for the sake of being politically correct, we may all be left vulnerable to President Ronald Reagan’s threat from above, not to mention all the local threats. Who’s writing, or deciding, the subroutine narrative?

  2. HappyCup

    Sounds like the same thing as going to a bar and chatting up drunk folks. The difference is, sometimes someone at the bar can surprise you.

  3. JimmyBee

    Hi Richard and Tracey,
    What kind of coffee are you two drinking? Yeah, that’s a softball question, so what? 🙂
    I’m having Dunkin’ Donuts original blend. It’s good.

    Serious question: Have either of you ever read Atlas Shrugged?

    1. Richard Dolan Post author

      Coffee: I usually make mine black; Tracey takes a small amount of cream and sweetener. When we met, she liked her Starbucks while I was always a Tim Horton’s man — although Dunkin’ Donuts original is a close second for me. So there’s that. Regarding Atlas Shrugged, oh man, I tried getting into that years ago and just couldn’t. Ayn Rand was a smart person who, in my view, was just too extreme in her position on human beings. I get it: she was an Eastern European who escaped communism. It’s not hard to understand her perspective. And there’s a lot in her individualist perspective I can agree with. But … no, I’m not really a fan or follower.

  4. Andromeda107

    Elon Musk needs to read some of your books before stating there is no evidence of ETs coming here. I used to wonder whether it’s possible that Musk is a hybrid or hubrid. Something about him is very fascinating. I am currently reading David Jacobs’s book (Walking Among Us), which is making me less inclined to believe Musk is a hubrid; Elon Musk is too public, and you see a lot of true expressions from him, unlike the hubrids Jacobs talks about in his book, who have to be shown how to dress themselves, dance, laugh, or even smile. Nevertheless, Musk is a fascinating character with an incredible mind, although I wish he would do some research on the UFO/abduction phenomenon before stating there is no evidence of ETs having come here, or being here. Also, William Shatner was touting that same line on NewsNation the other day, that ETs haven’t been here. It was very disappointing hearing him say that. I dropped the link to Captain Kirk’s disappointing interview. Thanks for sharing, Richard and Tracey.

    1. WBIsMe

      Courtesy dictates that, having spoken my piece, I should abstain from further comment. But, as many would agree, AI is an existential issue for humanity, and perhaps for many plant and animal species as well. There’s so much at stake that I hope I can be forgiven a second comment.

      Richard wrote, “I do tend to think, though, that [Elon Musk’s] instincts are in the right place most of the time.” I’m not sure you’re aware, Richard, exactly how right you are, in at least one crucial respect–namely, that software decisions related to AI are almost all made by instinct.

      Essentially every decision pertaining to software–its design, construction, and operation–is based on hunch–i.e., on instinct. The civil engineer can calculate the stress on a concrete column and can look up the ability of a concrete column of given dimensions, etc., to stand up to a specified load. In other words, there’s an observational science and a mathematically based theory underlying civil engineering.

      But, simply put, there is no theory underlying software. And, in the absence of theory, software-related decisions are made by hunch. In most organizations, the work of one software developer is not even read, let alone inspected, by other software developers. There are theories related to software testing. But, there’s an enormous gap between those theories and the actual practices of software development and software testing. Very few organizations even attempt to practice any sort of theoretically grounded, disciplined software testing.

      When it comes to research software, the situation is vastly less disciplined still. Research software development is typically left to graduate students who work under limited or no supervision and who have little or no training in disciplined software development (i.e., “software engineering”). Testing tends to be empirical and ad hoc: If it seems to work, that’s good enough.

      Put all this together and what we have in AI research software development is a series of decisions, each made by hunch and each likely made by an individual acting autonomously. The failure of any ONE of those decisions can lead to failure of the system as a whole. And the likelihood of a failure of the system as a whole grows exponentially with the number of such decisions. And, finally, as I claimed at the outset, failure of the system is potentially an existential threat to the human species.
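
      To make the compounding concrete, here is a purely illustrative sketch (the 0.99 per-decision success rate and the decision counts are hypothetical, not measurements): if each of n independent hunch-based decisions is correct with probability p, the system as a whole is sound only with probability p raised to the n.

        # Illustrative sketch only: assumes each hunch-based design decision is
        # independently correct with probability p (values below are made up).
        def chance_system_ok(p: float, n: int) -> float:
            return p ** n

        for n in (10, 100, 1000):
            print(f"n={n}: {chance_system_ok(0.99, n):.5f}")
        # n=10: 0.90438, n=100: 0.36603, n=1000: 0.00004

      Even with each hunch being right 99% of the time, a thousand such decisions leave almost no chance that the whole is sound.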

      Software systems such as AI are FAR more risky than the nuclear reactors we worried about a few years back. There is science–theories and standards–underlying nuclear engineering. Nuclear safety is largely a matter of ensuring adherence to those standards (which, by the way, has been demonstrated by experience to be less than perfect). Software has no science and no standards whatsoever against which to test. Speaking frankly, I’m not sure how the situation could possibly be worse.

      I, for one, am not content for my future and the futures of my children and their children to be held hostage to mere hunches–not even my own hunches. We MUST somehow remove the word “existential” from the risk of AI systems. And, currently, we have no way of ensuring that has occurred. The only apparent solution, therefore, is to ban every AI system that poses potential existential risk and to enforce that ban with every means at our disposal.

      For what it’s worth, when it comes to software, my comments are not those of an amateur or a dilettante. I have been involved in software development since the mid-1970s. I earned a bachelor’s degree in computer science (before that discipline was dumbed down), a master’s degree in computer information systems, and a Ph.D. in information science. My dissertation concerned the measurement of a certain class of software properties. I have taught graduate courses in software engineering on site at the principal B-2 plant of Northrop Grumman. And, I have been invited to speak on software and software risks to the US Pentagon and to the UK’s Government Communications Headquarters, among other venues. Almost all the claims I have made here are not the product of my own quite fallible thinking but are widely understood concepts of the discipline known as software engineering.

  5. BrianRuhe

    I think Elon Musk is a globalist Rothschild agent. He is positioned to be the hero, the good guy, to fool people. He comes from an elite family and acts like he was a poor student. Tesla was a car company that he took over; he never designed it.
    He spouts the global warming carbon lie, which is a scientific hoax.
    He is just another puppet who can’t possibly be inventing all these things; he’s just a front man.
    Brian Ruhe

  6. J-Rod

    Elon Musk should stick to being a pin-up boy for hair transplants cos that’s about all he’s good for.

  7. OC

    My problem with Elon is his view on Aliens.
    With all of his resources, his smarts, how can he not see what’s going on?
    Shouldn’t he be a little inquisitive about the subject?
    Something just doesn’t add up with his apparent 100% skeptical attitude.

  8. WBIsMe

    Mr. Musk’s Wikipedia bio makes no mention of academic training in large-scale software development (i.e., software engineering). And, the poor safety record of Tesla’s self-driving vehicles strongly suggests that Mr. Musk lacks an appreciation for the extreme difficulties attending large-scale development of software systems that may pose risks to humans. Along with many (most?) of those responsible for safety-critical software systems, he seems to be infected with what I call “software hubris.” Since the early days of computing, software developers have characteristically underestimated the complexity of the systems they undertake and have tended to deliver systems that pose “risks to the public.” (“Risks to the public” is the former name for what is now known as the “ACM RISKS Forum.” The ACM–that is, the Association for Computing Machinery–is one of two major US professional organizations for software engineers and is the world’s largest educational and scientific computing society.)

    Mr. Musk is not the man I would wish were proposing to lead the development of a sophisticated AI. He seems to think that including an “off” switch will avoid the most serious problems. But, I don’t see any reason to suppose that he understands how difficult it will likely be to implement an “off” switch that solves, rather than causes, safety issues.

    Years ago I was widely, though privately, mocked by my computer science students for my vocal pessimism regarding the safety of Tesla’s self-driving vehicles. “Tesla has done a lot of testing and has gotten all the bugs out” was the common opinion. I hope that my students recall my prediction. If so, perhaps they’re ready to learn a most important lesson. Testing to find and remove the bugs of a software system is a fool’s errand. The only approach capable of leading to safe systems is to avoid, in the first instance, the mistakes that lead to bugs.

    1. Richard Dolan Post author

      Good counterpoints re Musk. I do tend to think, though, that his instincts are in the right place most of the time. When I compare him to other extremely powerful and influential public figures, he seems to me to be the most far-sighted and (genuinely?) concerned about our future. Hey, maybe I’ve got rose-colored glasses on, I don’t know. But that is how he seems to me. In any case, thanks for this perspective.

    1. Richard Dolan Post author

      This is a big part of the problem, isn’t it? And not just the Russians and Chinese either. The thing is that AI — among other things — is a weapon. It’s a military weapon, an economic weapon, a propaganda weapon, a “weapon” to create transhumanism, and I’m sure much more. There are WAY too many reasons for all kinds of players out there to keep working it in an unfettered manner.

      1. maggie dyer

        Richard, great points on the weaponization of AI, and the scary idea of what other countries are cooking up. More people should know this.
        I pray that Musk “finds” you and you are able to give him a proper education re: ET. Not for his sake but for what he controls: X, xAI, and the opinions of the 260 million he says are on his platform.
        Re: Ayn Rand. I’m so glad I’m not the only one who just couldn’t commit to her books. Hey, I tried.
        Best to you both,
        Maggie

  9. ACTIVEGUARDIAN

    I totally agree with you when you say they could even leave it just where it is–we get NOTHING and Congress gets a bunch of bullshit “briefings” where they are shown fuzzy photos and videos and told, “The threat is of such magnitude, and we know so little as of yet, that you HAVE to give us whatever we want, and we can’t tell you any more for security reasons! Other nations or even alien beings might find out!”

    We get nothing and Congress gets a little more bullshit in exchange for a blank check.
