I'm curious why you tie emotions into intelligence. There are humans with conditions like alexithymia that reduce or remove their conscious perception of emotions, and it doesn't seem to make them any less self-aware or intelligent. Choice is another area where it's very hard, and potentially impossible, to say for certain whether a choice occurred. What do we even define as a choice?
A podcast I was listening to pointed out that Google's Magic Eraser for photos (it deletes unwanted items from photos) uses AI to do its job, but it's not advertised as being AI-based software.
A true choice is one where it was a real-world possibility that you could have done or said something else. IOW, not just "some random person other than myself could have done differently". And the reason I tie emotions to it (which I teach in class over a period much too long to condense meaningfully into a quick non-TLDR post) is that without emotions, there is no actual criterion for making choices. "This is the logical choice" always comes up against "But why is the logical choice preferable?" Without that, we necessarily act on the basis of conditioning and what could perhaps be termed "programming".
That's easy enough to create a convincing illusion of in a complicated enough system, to the point where it's probably impossible to distinguish from any choice a human mind makes. Random number generators (RNGs) are used in even the most basic videogame AIs. Even if they aren't explicitly programmed in, physical fundamentals of the universe, through factors like quantum tunneling, mean we could build something intended to behave completely predictably that would nonetheless act in entirely unpredictable ways.

Emotions in us seem to be a little like state machine weights, something also used in even very basic videogame AI. There are probabilities of taking particular courses of action, and the weight of those options is adjusted, in much the same way that the hormones associated with emotion seem to influence the pathways in our physical minds. I'm not arguing that those AIs (and probably no AI that currently exists on Earth) are actually self-aware, truly intelligent, or feeling emotions, but it's easy to imagine ways they could be created to appear as though they have emotions in ways that are indistinguishable from our own physical processes.
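To make the "state machine weights" comparison concrete, here's a minimal sketch of the kind of weighted action picker a basic videogame AI might use. The class and action names are made up for illustration; the point is just that nudging the weights shifts which actions become likely without hard-coding any single choice:

```python
import random

class WeightedAgent:
    """Toy action picker: weights act a bit like emotion-driven biases."""

    def __init__(self, weights):
        # weights: dict mapping action name -> relative probability weight
        self.weights = dict(weights)

    def adjust(self, action, delta):
        # Analogous to a hormone shift: bias future behavior toward or away
        # from an action, clamped so no option ever disappears entirely.
        self.weights[action] = max(0.01, self.weights[action] + delta)

    def act(self):
        # Pick one action at random, proportional to the current weights.
        actions = list(self.weights)
        return random.choices(actions, weights=[self.weights[a] for a in actions])[0]

agent = WeightedAgent({"flee": 1.0, "fight": 1.0, "wander": 2.0})
agent.adjust("flee", 3.0)  # something "scary" happened; fleeing becomes likely
print(agent.act())         # usually "flee" now, but never guaranteed
```

Nothing here is intelligent, of course, but from the outside the behavior already looks like a disposition being swayed by events rather than a fixed program.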
Anti-Piracy Group Takes Massive AI Training Dataset 'Books3' Offline

I'm going to skip the copyright discussions, and just go to why the dataset existed in the first place: So people could train their own AI models. Because, as I'm sure we've all experienced, there are times when someone makes something that is "almost" perfect for your needs, but not quite. And if it's the product of a mega-corp, the odds of you being able to modify it to better meet your needs are pretty slim. I have a bunch of books that I'd really like to feed into an AI, so it could pinpoint all the cases of overlap between the works, as I think that'd help track down some pretty interesting information that's largely been overlooked.
Meta's Next Big Open Source AI Dump Will Reportedly Be a Code-Generating Bot

If ever there was code that needed to be subjected to an independent audit to make sure it wasn't filled with malware, FB's stuff has got to be near the top of the list.
This is true. And if we aren't there yet, we certainly will be before too long. But to me, that is simulated intelligence, not artificial intelligence. There is a huge difference between fooling people into thinking you have created something artificially, and actually doing it.
The idea that such things would be philosophical zombies seems like a dangerous one: how could we prove such a thing without invoking circular reasoning?
How can we prove what "intelligence" is? I'm sure we've all had the experience where something we have or are dealing with is broken, we can't figure out how to fix it, so we go through a complicated process to get it fixed. Maybe it's finding a repair center we can take it to, contacting customer support, or whatever, but in the end we discover that the problem is something minor: we didn't realize that we needed to replace the battery, or update the software, or whatever it might be, but it was less complicated than the process required to find out how to fix the thing. We're not stupid, yet we couldn't figure out that we needed a new battery until we'd gone through a complicated troubleshooting ritual that wouldn't have been necessary if we'd just swapped the battery out from the get-go.

To me, until we can figure out a good definition for what we mean by "intelligence," all we can do is say what we know it not to be. That can only get you so far in figuring out what something is; sooner or later, you have to get into the microscopic details of what intelligence actually is, and that's really fucking hard. It's like how people can say that they definitely do not stand for this or that thing that a political party is saying, but they can't articulate what they do stand for.
I won't accept AI targeting wealthy corporate folk unless it's the "Thank you!" scene from Robocop, but repeated across the planet.
Is that why when Facebook announced that they were building a data center in my town they bragged about how little water it was going to use? Water resources in this area are already stretched thin, and while that water's circulating in the plant, it can't be used for other things. When it was announced, I was hearing farmers bitch that they weren't allowed to pull water from the local streams to irrigate their crops. Of course, that might be for the best, considering how lax the environmental laws here are. I know that the BBQ place I worked for routinely dumped all kinds of nasty chemicals (think the industrial version of Easy-Off oven cleaner) into the stream that ran by the place. Yes, just like with electrical power plants, the water runs through the facility and back out into the supply system, so it's not "wasted" in the sense that it's turned into styrofoam Big Mac containers or something, but while it's being used to enable people to share cat photos, it can't be used for watering crops. In the meantime, it's being used to keep dust down because they're digging through bedrock to build the place, as well as make the concrete for the building. Oh, and I don't have that site bookmarked. I subscribe to their RSS feed.
Data centers use closed loop cooling systems connected to commercial water-to-air chillers. There is no wastewater, except from the toilets. The IT facility I work in isn't a data center, but it makes the same type of statements. It's LEED certified. The wastewater from the fucking toilets is recycled. When the building opened, they put signs on the entry to the facilities saying that the toilets use recycled water: DO NOT DRINK!

Interesting Engineering is terrible. The AP article they quoted is hilarious. No data center is going to pump untreated water from a river to cool its systems. The only data center I worked at that had a cooling pond was for the locomotive engines they used for power backup. This was back in the 80s in Miami. When they ran the generators, the parking lot would flood.

The real story is the energy data centers consume and the associated carbon footprint.
Ok, I'm sorry, there is some truth to the article, but it's still highly misleading. Some data centers may use evaporative water chillers to cool their closed loop systems. These chillers need water flowing over them and evaporating to cool the water that's being used to cool the computers. Whether they use evaporative cooling "fueled" by water or refrigeration fueled by electricity depends on the availability of water and the prices of electricity and water. If the water is free, i.e., pumped from a river, that's cheaper than refrigeration. Some of the water evaporates and some drains back to the source.

This is still problematic, as water vapor adds to global warming. If this is happening, the EPA should get involved. I don't think the amount of water evaporating this way is a big problem, but I could be wrong; the water evaporating from irrigated fields and reservoirs is much more. If this is happening in communities where there is a water shortage and they are evaporating city water supplies, then it should be looked at more harshly and the costs applied directly. The communities may have enticed the companies to build their centers with the guarantee of cheap, plentiful cooling water.

Either way, it's not a problem with AI; it's a problem with data centers in general. Here's more: Free WAPO article.
And AI data centers use more resources than regular data centers. The energy usage of a single AI query is said to be 20X that of a Google search. Never let it be said that you didn't pimp for big corps.
https://apnews.com/article/chatgpt-...on-microsoft-f551fde98083d17a7e8d904f8be822c4 Of course, that pales in comparison to the amount of water that the Saudis are using in Arizona. https://apnews.com/article/saudi-ar...-agriculture-0d13957edaf882690e15c0bd9ccfa59f
Well, see, all we have to do to kill AI if it goes Skynet, is turn the water main off, ask it a particularly hard question, and let it fry its processor. Easy-peasy.