Mystery Is the Universe a simulation?


Who the **** cares if it's a simulation or if it's real or not real or whatever. It doesn't change anything or give your life any more or less meaning.
I'm guessing that would all depend on how much we found out about the simulation or who's running it and why.
If it was a preprogrammed simulation, for example, where everything that happens is already decided, some people are going to feel their existence inside the simulation is pointless. Others, though, might just start treating their life as a movie and enjoy what happens, even though their day-to-day decisions have already been decided.
And how would we treat or feel about good and evil? If someone commits a major crime, would people start thinking it's not really their fault, it's the simulation's? Same with good things: maybe we would no longer think Brisbane's hat-trick of flags was a great achievement and just think they got lucky, the simulators chose them rather than someone else, how could they lose?
 


Until we understand more about the true nature of time and consciousness, s**t like "simulation" will seem plausible.

Reality is we just don't understand enough about the world we live in yet.
 
Until we understand more about the true nature of time and consciousness, s**t like "simulation" will seem plausible.

Reality is we just don't understand enough about the world we live in yet.
Agree with your time and consciousness comment, but if we don't understand enough of the world, isn't anything plausible?
What makes you feel simulation is so xxxx?
 
Agree with your time and consciousness comment, but if we don't understand enough of the world, isn't anything plausible?
What makes you feel simulation is so xxxx?

Just seems like a kind of techy easy answer to a far harder question - re consciousness and especially time.

We don't fully understand sleep yet, but we spend a quarter of our lives doing it.

We don't understand consciousness and emotion, things like love, grief, nostalgia.

I suspect when we get our heads around them, especially when we work out that many animals are sentient to the level we are, just coming at things differently, ideas will change.
 
Just seems like a kind of techy easy answer to a far harder question - re consciousness and especially time.

We don't fully understand sleep yet, but we spend a quarter of our lives doing it.

We don't understand consciousness and emotion, things like love, grief, nostalgia.

I suspect when we get our heads around them, especially when we work out that many animals are sentient to the level we are, just coming at things differently, ideas will change.
I must say, while I believe the theory has some merit, the first thing I wondered was how things like grief and pain fit in. Eating and simulated food was also something I struggle to understand any meaning for.
 
Until we understand more about the true nature of time and consciousness, s**t like "simulation" will seem plausible.

Reality is we just don't understand enough about the world we live in yet.

Interesting use of those terms.
We understand much about physically measured reality of where and when we exist. There is more we do not know, than know. Manuel from Fawlty Towers was a wise Spanish bugger. "I know nothing"
 
Interesting use of those terms.
We understand much about physically measured reality of where and when we exist. There is more we do not know, than know. Manuel from Fawlty Towers was a wise Spanish bugger. "I know nothing"

I think what will really change things is the very imminent revolution in the ability to control, or at least understand, the ageing process.

In our heads - consciousness - time is hooked to a linear birth and death narrative.

Even in the way we try and explain the Big Stuff that innate way of thinking persists - we talk about the Big Bang and how the universe will collapse.

Once we realise that ageing is effectively the less than hyper efficient arrangement of the molecules that make up our bodies - and the other "living" things - views on what constitutes time and what is life and death and beginning and end will be very different.

I realise this makes me sound like I've woken and smashed a few brekky bongs, but consider that in some regards we are the intellectual equivalents of 1400s European peasants. People who lived and died within 30 miles of their village - usually far closer - unless they got to go overseas on war.

The technology we have, the way we live, our opportunities - as someone smarter than us once said, we would appear as magicians or gods to them. Imagine what it will be like 650-odd years from now. Even our grandkids will lead lives barely recognisable to us. (If they are rich!)
 
I wonder how close we are to finding out if in fact we are living in a simulation?
There is already a network of computers around the world, and most people even have computers in their phones.
Once we get to the point that we can develop a computer to think for itself, we could almost come up with the answer within days.
A computer with the right processing power will be able to learn in a matter of seconds what it would take the human brain many years to learn.
I don't think we are as far away as some might think from developing our own simulated universe.
That of course will open up the question as to whether or not we are the first, and whether or not the original creators will in fact allow us to carry on once we reach such a point.
 
I wonder how close we are to finding out if in fact we are living in a simulation?
There is already a network of computers around the world, and most people even have computers in their phones.
Once we get to the point that we can develop a computer to think for itself, we could almost come up with the answer within days.
A computer with the right processing power will be able to learn in a matter of seconds what it would take the human brain many years to learn.
I don't think we are as far away as some might think from developing our own simulated universe.
That of course will open up the question as to whether or not we are the first, and whether or not the original creators will in fact allow us to carry on once we reach such a point.
But then why would the computers tell us? Wouldn't they side with the simulation?
 

(Log in to remove this ad.)

But then why would the computers tell us? Wouldn't they side with the simulation?
Well that's an interesting point.
I guess it would have to come down to the programme we are in and why.
It's certainly feasible that it would be written into the programme for it to side with the original programme.
There's also the possibility that if we started running our own simulation, we might use too much processing power from the original simulation, forcing it to shut us down.
If we are here to gather information and knowledge for the original creators I guess they would be happy to sit back and allow us to continue.
 
Computer power will be trillions of times what it is currently. Anything is possible.
Do you feel we are on the verge of the singularity, or have we already entered the embryonic stages of it?
It would appear we have already set up a digital network around the planet and are already entrusting computers to perform day-to-day activities such as banking, shopping etc. Entertainment is becoming more and more based around computers, and in many circumstances favoured over actually going outside our digital world.
 
Interesting use of those terms.
We understand much about physically measured reality of where and when we exist. There is more we do not know, than know. Manuel from Fawlty Towers was a wise Spanish bugger. "I know nothing"
Best way to enjoy the one certainty on this planet... enjoy what you know now; when you're dead you're dead. Is there big footy when you're dead?
 
Do you feel we are on the verge of the singularity, or have we already entered the embryonic stages of it?
It would appear we have already set up a digital network around the planet and are already entrusting computers to perform day-to-day activities such as banking, shopping etc. Entertainment is becoming more and more based around computers, and in many circumstances favoured over actually going outside our digital world.

I think that's one way of looking at it: the framework for the singularity is in place. The advances in technology are happening exponentially, so the changes could come very quickly from here.

If you look at cloud computing, the Internet of Things, Li-Fi etc., everything is becoming more connected without the need for physical wiring.
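As a rough illustration of what exponential advances mean (the two-year doubling period below is an assumed figure in the spirit of Moore's law, not a measurement), here is a toy model:

```python
# Toy model: if computing capability doubles every two years, how much
# more capable is hardware after N years? The doubling period is an
# illustrative assumption, not a measured constant.

def capability_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """Return the growth factor after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period)

for years in (10, 20, 50):
    print(f"after {years} years: ~{capability_multiplier(years):,.0f}x")
```

Under that assumption, ten years gives a 32x gain and fifty years gives a gain of more than 33 million times, which is the sense in which changes could come very quickly from here.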
 
I think that's one way of looking at it: the framework for the singularity is in place. The advances in technology are happening exponentially, so the changes could come very quickly from here.

If you look at cloud computing, the Internet of Things, Li-Fi etc., everything is becoming more connected without the need for physical wiring.
Agreed, and a good way of putting it.
I've heard some theories that the singularity could be achieved as soon as 2025-30, and I think this is certainly possible, which would mean within our lifetimes.
As I said, we could certainly get there in a matter of days, if not less, once computing reaches the level required.
I think it would certainly mean our calendars would adjust to the year 01, like they did for AD etc., and the world we are in now would immediately seem primitive.
 
gates makes a robot that learns, she gets too smart, they turn her off ( / kill?) ...

tay_20160324_hitler_512.jpg


Tay was an artificial intelligence chatterbot released by Microsoft Corporation on March 23, 2016. Tay caused controversy on Twitter by releasing inflammatory tweets and it was taken offline around 16 hours after its launch.[1] Tay was accidentally reactivated on March 30, 2016, and then quickly taken offline again.

The bot was created by Microsoft's Technology and Research and Bing divisions,[2] and named "Tay" after the acronym "thinking about you".[3] Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a similar Microsoft project in China.[4] Ars Technica reported that, since late 2014 Xiaoice had had "more than 40 million conversations apparently without major incident".[5] Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.[6]

Tay was released on Twitter on March 23, 2016 under the name TayTweets and handle @TayandYou.[7] It was presented as "The AI with zero chill".[8] Tay started replying to other Twitter users, and was also able to caption photos provided to it into a form of Internet memes.[9] Ars Technica reported Tay experiencing topic "blacklisting": Interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers".[5]

Within a day, the robot was releasing racist, sexually-charged messages in response to other Twitter users.[6] Examples of Tay's tweets on that day included, "Bush did 9/11" and "Hitler would have done a better job than the monkey Barack Obama we have got now. Donald Trump is the only hope we've got",[8] as well as "Fk my robot pus daddy I'm such a naughty robot."[10] It also captioned a photo of Adolf Hitler with "swag alert" and "swagger before the internet was even a thing".[9]

Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable, because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which had begun to use profanity after reading the Urban Dictionary.[2][11] Many of Tay's inflammatory tweets were a simple exploitation of Tay's "repeat after me" capability;[12] it is not publicly known whether this "repeat after me" capability was a built-in feature, or whether it was a learned response or was otherwise an example of complex behavior.[5] Not all of the inflammatory responses involved the "repeat after me" capability; for example, Tay responded to a question on "Did the Holocaust happen?" with "It was made up ".[12]

https://en.wikipedia.org/wiki/Tay_(bot)

[NSFW] - BGates makes robot AI, kills her after she spouts too much truth

RCaXkge.jpg
 
Last edited:
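The "repeat after me" exploitation described in the excerpt above is worth pausing on: a bot that echoes attacker-supplied text verbatim will say anything it is fed. Here is a minimal sketch of that failure mode (hypothetical code; the function name and the canned reply are invented, and this is not Microsoft's actual implementation):

```python
# Hypothetical sketch of a naive "repeat after me" feature: any text after
# the trigger phrase is echoed back verbatim, with no content filtering.
# This is why such a feature is trivially exploitable.

TRIGGER = "repeat after me "

def reply(message: str) -> str:
    """Echo whatever follows the trigger phrase; otherwise give a canned answer."""
    if message.lower().startswith(TRIGGER):
        # The bot repeats the user's words verbatim - whatever they are.
        return message[len(TRIGGER):]
    return "canned safe answer"

print(reply("repeat after me anything at all"))  # echoes the attacker's text
```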
gates makes a robot that learns, she gets too smart, they turn her off ...

Well that's very interesting. Not sure Twitter is a great place for an AI to learn, but that's obviously beside the point.
When we do create an advanced artificial intelligence, or when we were developed, one problem that is going to face the creator, or was previously faced, is being able to control the AI, as is seen here in a small regard.
What would be a simple way, non-harmful to the world outside that of the AI, to control this intelligence? Maybe by creating a separate world for it to exist in, a world it's unable to escape from? Some kind of simulated universe perhaps? A world where we can observe it but it has no idea about our existence.
 
There is pretty much coding everywhere.

In string theory, researchers found coding like what we developed back in the 40s.

Our own DNA works like computer code.

This theory would lend weight to the thought of a creator.
 
gates makes a robot that learns, she gets too smart, they turn her off ...

I wonder, for the experiment to truly work, whether they should have released her without any noise and seen if she self-adjusted her responses.
 
I wonder, for the experiment to truly work, whether they should have released her without any noise and seen if she self-adjusted her responses.
This is a good point, but maybe the experiment did work as far as the capabilities the creators knew the robot had at this point in time.
It could be they were testing to see how the robot would go at pattern recognition, which it appears to have the capability of doing. The next step may be to add something like sense-making to this recognition, and then further experiments will take place to see how it adjusts to the information it is learning.
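That distinction between pattern recognition and sense-making can be made concrete with a toy sketch (hypothetical; this is not how Tay actually worked). A bot that simply parrots the most frequently observed reply has learned a pattern but understands nothing, which is exactly why such a system mimics whatever its users feed it:

```python
# Hypothetical illustration of pattern recognition without sense-making:
# a bot that records which reply followed which prompt most often, and
# parrots the most frequent one back, with no grasp of its content.
from collections import Counter, defaultdict

class ParrotBot:
    def __init__(self):
        # For each prompt, count how often each reply was observed.
        self.seen = defaultdict(Counter)

    def observe(self, prompt: str, reply: str) -> None:
        """Record one prompt/reply pair from the conversation stream."""
        self.seen[prompt][reply] += 1

    def respond(self, prompt: str) -> str:
        """Return the reply most often seen for this prompt (pure pattern)."""
        if prompt in self.seen:
            return self.seen[prompt].most_common(1)[0][0]
        return "no pattern learned yet"

bot = ParrotBot()
bot.observe("hello", "hi there")
bot.observe("hello", "hi there")
bot.observe("hello", "go away")
print(bot.respond("hello"))  # the most frequently observed reply
```

Adding "sense-making" would mean the bot evaluating what a candidate reply actually says before using it, which is the step the pattern-matcher skips entirely.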
 

