> It would pretty much end the debate on an afterlife.....
Not really.
> Who the **** cares if it's a simulation or if it's real or not real or whatever. It doesn't change anything or give your life any more or less meaning.
I'm guessing that would all depend on how much we found out about the simulation, who's running it, and why.
> Until we understand more about the true nature of time and consciousness, shit like "simulation" will seem plausible.
> Reality is we just don't understand enough about the world we live in yet.
Agree with your time and consciousness comment, but if we don't understand enough of the world, isn't anything plausible?
What makes you feel simulation is so xxxx?
I must say, while I believe the theory has some merit, the first thing I wondered was how things like grief and pain fit in. Eating and simulated food are also something I struggle to find any meaning in. It just seems like a techy, easy answer to a far harder question about consciousness, and especially time.
We don't fully understand sleep yet, but we spend a quarter of our lives doing it.
We don't understand consciousness and emotion, things like love, grief, nostalgia.
I suspect when we get our heads around them, especially when we work out that many animals are sentient to the level we are, just coming at things differently, ideas will change.
Until we understand more about the true nature of time and consciousness, shit like "simulation" will seem plausible.
Reality is we just don't understand enough about the world we live in yet.
Interesting use of those terms.
We understand much about the physically measured reality of where and when we exist. There is more we do not know than know. Manuel from Fawlty Towers was a wise Spanish bugger: "I know nothing".
> It would pretty much end the debate on an afterlife.....
But open up the question about whether there is life at all???
Who the **** cares if it's a simulation or if it's real or not real or whatever. It doesn't change anything or give your life any more or less meaning.
> I wonder how close we are to finding out if in fact we are living in a simulation?
> There is already a network of computers around the world, and most people even have computers in their phones.
> Once we get to the point that we can develop a computer to think for itself, we could almost come up with the answer within days.
> A computer with the right processing power will be able to learn in a matter of seconds what it would take the human brain many years to learn.
> I don't think we are as far away as some might think from developing our own simulated universe.
> That of course will open up the question of whether or not we are the first, and whether or not the original creators will in fact allow us to carry on once we reach such a point.
But then why would the computers tell us? Wouldn't they side with the simulation?
> But then why would the computers tell us? Wouldn't they side with the simulation?
Well, that's an interesting point.
> Computer power will be trillions of times what it is currently. Anything is possible.
Do you feel we are on the verge of the singularity, or have we already entered the embryonic stages of it?
> Interesting use of those terms.
> We understand much about the physically measured reality of where and when we exist. There is more we do not know than know. Manuel from Fawlty Towers was a wise Spanish bugger: "I know nothing".
Best way to enjoy the one certainty on this planet... enjoy what you know now; when you're dead you're dead. Is there BigFooty when you're dead?
> Do you feel we are on the verge of the singularity, or have we already entered the embryonic stages of it?
It would appear we have already set up a digital network around the planet and are already entrusting computers to perform day-to-day activities such as banking, shopping, etc. Entertainment is becoming more and more based around computers, and in many circumstances it is favoured over actually going outside our digital world.
> I think that's one way of looking at it: the framework for the singularity is in place. The advances in technology are happening exponentially, so the changes could come very quickly from here.
> If you look at cloud computing, the Internet of Things, Li-Fi etc., everything is becoming more connected without the need for physical wiring.
Agreed, and a good way of putting it.
> gates makes a robot that learns, she gets too smart, they turn her off ...
Well, that's very interesting. Not sure Twitter is a great place for an AI to learn, but that's obviously beside the point.
Tay was an artificial intelligence chatterbot released by Microsoft Corporation on March 23, 2016. Tay caused controversy on Twitter by releasing inflammatory tweets and it was taken offline around 16 hours after its launch.[1] Tay was accidentally reactivated on March 30, 2016, and then quickly taken offline again.
The bot was created by Microsoft's Technology and Research and Bing divisions,[2] and named "Tay" after the acronym "thinking about you".[3] Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a similar Microsoft project in China.[4] Ars Technica reported that, since late 2014 Xiaoice had had "more than 40 million conversations apparently without major incident".[5] Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.[6]
Tay was released on Twitter on March 23, 2016 under the name TayTweets and handle @TayandYou.[7] It was presented as "The AI with zero chill".[8] Tay started replying to other Twitter users, and was also able to caption photos provided to it into a form of Internet memes.[9] Ars Technica reported Tay experiencing topic "blacklisting": Interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers".[5]
Within a day, the robot was releasing racist, sexually-charged messages in response to other Twitter users.[6] Examples of Tay's tweets on that day included, "Bush did 9/11" and "Hitler would have done a better job than the monkey Barack Obama we have got now. Donald Trump is the only hope we've got",[8] as well as "Fk my robot pus daddy I'm such a naughty robot."[10] It also captioned a photo of Adolf Hitler with "swag alert" and "swagger before the internet was even a thing".[9]
Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable, because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which had begun to use profanity after reading the Urban Dictionary.[2][11] Many of Tay's inflammatory tweets were a simple exploitation of Tay's "repeat after me" capability;[12] it is not publicly known whether this "repeat after me" capability was a built-in feature, or whether it was a learned response or was otherwise an example of complex behavior.[5] Not all of the inflammatory responses involved the "repeat after me" capability; for example, Tay responded to a question on "Did the Holocaust happen?" with "It was made up ".[12]
https://en.wikipedia.org/wiki/Tay_(bot)
[NSFW] - BGates makes robot AI, kills her after she spouts too much truth
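The "repeat after me" exploit and the topic "blacklisting" described in that excerpt are easy to picture in code. Below is a minimal hypothetical sketch in Python; it is not Tay's actual implementation (which was never published), and the function name, blacklist entries, and canned reply are all invented for illustration. It just shows why a naive echo feature will say anything a user feeds it, while blacklisted topics get a safe canned response:

```python
# Hypothetical sketch of the two behaviours described in the Tay excerpt:
# 1. a "blacklist" of hot topics that always get a safe, canned answer, and
# 2. a naive "repeat after me" feature that echoes arbitrary user text.
# This is NOT Tay's real code; it only illustrates the failure mode.

CANNED_ANSWER = "I don't really have an opinion on that."
BLACKLISTED_TOPICS = {"eric garner"}  # topics that trigger the canned reply

def reply(message: str) -> str:
    lowered = message.lower()
    # Blacklisted topics get the same safe response every time,
    # as the Ars Technica report described.
    if any(topic in lowered for topic in BLACKLISTED_TOPICS):
        return CANNED_ANSWER
    # The exploit: everything after "repeat after me:" is echoed back
    # with no filtering, so users can make the bot say anything.
    prefix = "repeat after me:"
    if lowered.startswith(prefix):
        return message[len(prefix):].strip()
    return "lol tell me more"  # placeholder for the learned-chat path

print(reply("repeat after me: anything at all"))      # parrots the user
print(reply("What do you think about Eric Garner?"))  # canned, safe reply
```

The point of the sketch is that the echo path returns user-supplied text with no filtering at all, which is exactly the behaviour the article says users exploited; the blacklist only protects the handful of topics someone thought to list in advance.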
> gates makes a robot that learns, she gets too smart, they turn her off ...
I wonder, for the experiment to truly work, if they should have released her without any noise and seen if she self-adjusted her responses.
> I wonder, for the experiment to truly work, if they should have released her without any noise and seen if she self-adjusted her responses.
This is a good point, but maybe the experiment did work, as far as the capacities the creator knew the robot had at this point in time.