https://stackdiary.com/zoom-terms-now-allow-training-ai-on-user-content-with-no-opt-out/ Zoom may be the next to pitch themselves onto the pitchforks of machine learning by demanding that they be allowed to use their user data for that purpose.
https://www.biorxiv.org/content/10.1101/2023.07.28.550993v1.full Machine learning will not save you from "garbage in, garbage out" even for the sake of correlating microbial biomes with types of cancer. Probably a good thing to try machine learning on, though.
Is bullshit like this really AI though? The media seems to be throwing that term around a lot lately, and in most cases I don't think it's AI. Because in this case if it actually was intelligent, it would know it was creating chlorine gas. (Or maybe it did know, and it's already too late to stop Skynet.)
It is AI. Remember, the term "AI" is broad and vague. Roombas, for example, do have a form of AI that allows them to vacuum your house, while occasionally throwing themselves down the stairs out of a sense of despondency.

This was a failure of the designers to think about the limitations of their software. For years now, there have been websites where you type in what you have in your fridge, and they scan through a database of recipes and suggest the ones that use the most of the ingredients you've provided. So it's not taking what you give it and coming up with something on the fly; it's just plugging your list into recipes until it finds one that matches most of the ingredients.

Another thing they didn't do, and this is Programming 101, is "sanitize their inputs." That means if you try to enter "bleach" in a food recipe, it gets rejected because it's not on an approved list. And, yes, I cannot stress enough how basic this is when it comes to teaching people programming. (It's also a lesson a lot of folks fail to learn, because more than one big exploit has been found by typing in the wrong kind of data and having it enable hackers to wreak havoc.) Seriously, back in the mid-80s, when I was taking a computer class in high school where all we had were TRS-80s, we had to ensure that if someone typed in something they weren't supposed to (say, numbers instead of letters), the computer immediately rejected it. (Of course, since computers were such an unknown to most people at that point, we always tried to do it so that the response would make the person who entered the wrong thing think they'd broken the computer.)
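To make the point concrete, here's a minimal sketch of both ideas from the post: an allowlist that rejects non-food inputs, and the "plug the list into recipes and pick the best match" lookup. All the ingredient and recipe names are made up for illustration; a real site would have a far bigger database.

```python
# Hypothetical allowlist of things that are actually food.
APPROVED_INGREDIENTS = {"chicken", "rice", "onion", "garlic", "tomato"}

# Hypothetical recipe database: name -> set of ingredients used.
RECIPES = {
    "chicken and rice": {"chicken", "rice", "onion"},
    "tomato garlic pasta": {"tomato", "garlic", "onion"},
}

def suggest_recipe(user_ingredients):
    # Sanitize the input: reject anything not on the approved list
    # instead of passing it along (so "bleach" never gets this far).
    cleaned = {item.strip().lower() for item in user_ingredients}
    rejected = cleaned - APPROVED_INGREDIENTS
    if rejected:
        raise ValueError(f"Not food ingredients: {sorted(rejected)}")
    # No generation on the fly: just pick the stored recipe that
    # overlaps the most with what the user has on hand.
    return max(RECIPES, key=lambda name: len(RECIPES[name] & cleaned))

print(suggest_recipe(["chicken", "rice"]))  # -> chicken and rice
```

No intelligence required: a set intersection and a `max()` over stored recipes, with bad input stopped at the door.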
It's AI if LLMs are AI (which, tbf, maybe they shouldn't be so considered). The problem here is that LLMs are not well-constrained to keep their responses in the problem domain; they don't have a concept of problem domains. The best you can do is repeatedly give it a prompt telling it not to create anything toxic to humans (in this case), but you're still depending on sufficiently strong associations between particular ingredients and "toxic"; worse, between particular ingredient reaction products and "toxic"; and even worse, on any sufficiently strong association between ingredients in this recipe-creation context and chemical reactions at all. And the reverse is true: if you ask it whether a 1:1 mixture of hydrochloric acid and sodium hydroxide would make a good drink ingredient, it's going to say no, even though the reaction product is just salt water. Edit: well, maybe it'll get that one, because it's such a common example, but you get what I mean.
I don't disagree with either of you, @Tuckerfan and @Order2Chaos; I'm just questioning whether the word "intelligence" is appropriate. Perhaps it is, but it still sounds like we're a long way from having sentient (?) machines.
This might be a building block, but I doubt it. It's more an infinite number of monkeys on typewriters. Apologies to the monkeys, who are already sentient.
As Richard Campbell (the guy from some of the podcasts I've posted in the AI threads) says, "They only call it 'AI' when it doesn't work, when it's a new use for the tech, or when they've come up with something and don't know what it's good for. Otherwise, they apply a specific term to it, like LLM."

Part of the problem is, we don't have a very good idea of what actual intelligence is. I mean, most people don't know the difference between sentient and sapient, for example. "Sentient" does not, strictly speaking, cover things like self-awareness and conscious thought; it's simply the ability to feel emotions. (I'm simplifying a tad, but bear with me.) In common parlance, we use it to include things like self-awareness and conscious thought, but those are actually what it means to be sapient. All mammals are sentient, while only a small subset are sapient (possibly limited to just us humans, but I wouldn't be surprised if whales and elephants also turned out to be sapient). Even then, the experts still throw up their hands at exactly what's going on, because we don't know. If we're going to have trouble figuring out what makes us sapient, we're going to have problems not only trying to duplicate it but in knowing whether we've succeeded.

I was listening to an old radio serial recorded back in the 50s that was hosted by legendary SF editor John W. Campbell. In his introduction to one episode, he talked about having a robot in his home that did work for him. Now, I know what you're thinking: HTF did this guy get a robot in his house in the 1950s? As he points out, they're actually not that uncommon; we just don't call them robots, we call them "thermostats." But they are a form of automation, and thus, technically, robots. We just don't perceive them as being that.

Back to intelligence. If you've owned a pet like a cat or a dog, you know that their lack of opposable thumbs and the ability to speak doesn't mean they can't communicate.
If that food bowl is empty come dinner time, your furry friend will be more than happy to inform you of the fact. How much intelligence does that require? Certainly not as much as this thread, but it does require some. How does that compare to a Roomba or ChatGPT? I don't know. I don't even know how you'd come up with a test to determine that an AI was as intelligent as a domesticated animal, let alone humans. I suspect that this is going to be an area where our understanding of intelligence is only accomplished by creating AI systems and going, "Ah, okay, that's not something we associate with intelligence, so why is it doing that?"
I mean, is sentience/sapience/consciousness actually necessary to consider something an AI? That's kind of a definitional choice rather than a factual question.
Hey, Jude. Take a bad thought and make it worser. MIT Scientists Create Norman, The World's First "Psychopathic" AI It's been a good run, I guess. We made it about 300K years as a species, and we managed to take out the rest of the life on this planet as well. Go, us, right? I mean, sure, T. rexes look fearsome with their giant teeth, but did they take out a good 90% of the species on the planet? I think not. Take a bow, humans! We outdid the stuff in our nightmares!
I don't think the term "intelligence" is appropriate unless there is true self-awareness. And personally (though I don't have the time to go into my reasons), I wouldn't call it true intelligence unless it could feel emotions and make true choices. (I know there are some who deny that anyone makes true choices, but I have never heard anything that seems to me to be proof of determinism, or even tends toward actual proof.)
My cat is more sapient than most people. LLMs are just glorified tape recorders: they repeat patterns. If they were allowed to evolve, that is, to make changes to their own code, it might get interesting at some point.