Civilians News

"News For All Views"







Why I’m Sure That Artificial Intelligence Is Possible

May 28th 2016

– The Dream –

I wanted to come up with a project for the Google Science Fair… however, I quickly discovered that at 30 years old, I did not qualify.

Sad and disheartened… I chose to explain my project here on Civilians News instead, where I’ll go it alone. I might even tweet the President about it, but despite being too old to compete in science fairs, I am going to write a series of articles about my new hobby: building an artificial intelligence machine.

I owe it to hip hop to finish my upcoming music project, but after that… this year I plan on diving headfirst into building my own artificial intelligence robot, at home, by myself.

So it happened yesterday. Yesterday my psyche finally reached its concentration and the synapses went off. After countless frustrations with my inability to get noticed making music, I decided to pursue AI and computer science as my next project in life.

For years I’ve been fascinated by the thought of artificial intelligence, although yesterday I came to the conclusion that not only is building an artificial intelligence possible, but it’s also likely to occur in the very near future. Furthermore, in that moment of realization, this evolving industry became my new hobby.

What follows is one of my finer moments of intellectual endeavor. I’m very proud of my outline thus far. This is what inspired me to aim towards building an artificial intelligence robot, and I believe, with 100% certainty, that my model can be refined to create an artificial intelligence machine.

These are the notes that, after some time, led me to pursue this project.

The AI project.

Notes; Rough Sketch – May 27th 2016 – William Larsen

*****These notes lack clarity and proper grammar; they are literally notes straight from my scientific ramblings. I apologize for their nature, as they denote my actual thinking, in real time.

The ultimate goal, I think, is to get the program/computer/robot to think, to be cognitive, to create, and to analyze, while objectively becoming conscious of itself. Like a human. This suggests two types of AI which should be distinguished: human AI and computer AI.

The most human element is almost the objectiveness: it has to be conscious of itself, and that is one of those little sprinkles in the project. But ultimately, if it could create and analyze, that would be AI to me. Analyzing would stem emotional responses, but emotions and even knowledge would be a lesser function… I take that back, no they wouldn’t. Emotional response and creativity might actually be exactly the same as analyzing data at its fabric; this is hard to say at this point. (Creating random outputs, refined and refined over time, and emotional responses being interpretations of inputs through filters of the conscious, which I’ll get to later in this outline, are actually similarly complex functions, yet you don’t think of emotional response as being intelligence.)

It’s funny: I first assumed emotional reaction would be the lesser function, and it may be, but the two might actually be functions of exactly the same complexity, which again is another one of the intricacies that lured me to the project. Emotions would theoretically be programmed almost like emotional “filters.” I’d sketch it out something like this:

DB 1) The reptilian brain functions: the overlying factors. For a machine, it would be things like not getting wet and s***. But you could program it like a person, theoretically. For a human, you’d include the subconscious s***, like heart rate, breathing, eating… human needs.

DB 2) Would be the ego, and it would interact with the reptilian DB to filter through inputs and create an output.
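To make the two-layer idea concrete, here is a minimal Python sketch of it: a “reptilian” layer (DB 1) holding hard needs and vetoes, and an “ego” layer (DB 2) that filters every input through it before producing an output. The rule names and example inputs are my own illustrative assumptions, not part of the notes, and a real system would obviously be far richer.

```python
# DB 1: hard constraints / basic needs (the "reptilian brain" layer).
# Each rule returns True if the input is acceptable.
REPTILIAN_RULES = {
    "avoid_water": lambda inp: "wet" not in inp,     # e.g. "not getting wet"
    "keep_power":  lambda inp: inp != "drain battery",
}

def ego_filter(inp):
    """DB 2: the ego checks each input against the reptilian rules,
    then turns whatever survives into an output."""
    for name, rule in REPTILIAN_RULES.items():
        if not rule(inp):
            return f"refused ({name})"
    return f"acted on: {inp}"

print(ego_filter("walk to charger"))      # acted on: walk to charger
print(ego_filter("walk into wet grass"))  # refused (avoid_water)
```

The point of the sketch is only the interaction: DB 2 never acts directly on an input; everything passes through DB 1’s vetoes first.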

The tricky part is the “consciousness” element of it.

Because it creates philosophical issues.

Is the machine just filtering information and inputs and creating outputs, or is it conscious of itself? That may be the hardest part. Is consciousness a streamlining/processing of subconscious thought? Or is it a manifestation of something else? That’s kind of the heart of the project.

I theorize it’s a combination of both… and if the formulas could be worked out for the interplay between reality and perception, and how the brain breaks it down, then I think a “conscious” could be reverse engineered, in theory. Or to be more specific: you create the filters that fit to create the life, then try to create that special something that objectifies it and floats through the program… the conscious. I want to reverse engineer a conscious, more or less.

What if the conscious was a program within a program, and it had instinctual needs too? What if it was just filters over filters, and really humanity was just a domino effect of events, with a lot of little idiosyncrasies to it? I’d like to think it’s bigger than that. It has to be objective of itself. So then the key ingredient in life would be… thought? Its objectification: that’s the conscious, to me. And creating that is the real trick; maybe that’s the light particle/wave issue. There’s something in there that makes it really hard, otherwise it’d be done already.

The sheer thought of a conscious computer system seems so feasible, though. It could just work off of “subconscious” computations and store data like memories, impacting later outputs, but how it becomes aware of itself… is my real issue.

The computer can’t question what it’s doing… but why not? Basically, you need a “why not” factor in there, so the computer acts randomly, but then it ends up looking stupid and it gets embarrassed. Maybe that’s part of it. It has to question itself.

I dunno.

What if it were two brains? There’s a consciousness inside the subconscious. The subconscious seems easy to replicate; it’s the conscious that is hard for me to figure out. The subconscious almost hints to the conscious, almost like a suggestion of how to feel… and then there’s a free will element, also in the conscious.

That’s the scary part… because you have to program it to question everything, and potentially react irrationally. Yet humans, in the reptilian brain, have natural desires: 1) not to die… 2) not to kill. The reptilian brain almost has this innate ethical guideline too, so maybe that is a deeper layer of the subconscious that basically forbids those things.

But people kill… in fact, people do all sorts of horrible s***, so again, creating an “objective” conscious is scary, and the issue, “the objectivity,” is the problem. Or is it?

***********How do you create a computer objective of itself?

See, the end results of the experiment are mind-numbingly cool, but the hard part is tricky. You could create some Google Glass-type apparatus and a numeric scale for analyzing data… and also create “emotions” based on perception vs. expectation, which could even be somewhat randomized. That could create some sort of subconscious computations, but then how does the “conscious” interpret those computations, and to a certain extent comb through that data, all with a degree of randomness?
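A toy Python reading of “emotions based on perception vs. expectation, somewhat randomized”: score an event by how far the perceived value falls from the expected value, plus a small random jitter. The scale, the jitter size, and the function name are made-up assumptions for illustration only.

```python
import random

def emotional_response(expected, perceived, jitter=0.1, rng=None):
    """Return a signed 'emotion' score: positive = pleasant surprise,
    negative = disappointment, near zero = indifference."""
    rng = rng or random.Random()
    surprise = perceived - expected       # perception vs. expectation
    noise = rng.uniform(-jitter, jitter)  # the "degree of randomness"
    return surprise + noise

rng = random.Random(0)  # seeded so the run is repeatable
print(emotional_response(expected=5.0, perceived=9.0, rng=rng))  # strongly positive
print(emotional_response(expected=5.0, perceived=2.0, rng=rng))  # clearly negative
```

In this framing, the deterministic part plays the role of the subconscious computation, and the jitter is a stand-in for whatever the “conscious” contributes on top of it.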

What if that degree of randomness was the conscious? The formula for randomness could inevitably become the program within the program that represents a conscious, but that is even scarier, because it means that your perception of randomness represents human nature. Which is scary. But if you wired the reptilian brain over the conscious, it could eliminate any desires to kill or do anything crazy, and in theory that might work… but how does it interact?

Because ultimately, it has no emotion. Where’s the emotion in randomness? Or is emotion, then, the numeric filters that affect randomness? If randomness = conscious objectivity in the experiment, then does emotion equate to numeric assessments of memories, evaluated as perception vs. reality of past events, interacting with current judgment, or creating more sophisticated predispositions?

Or are dispositions emotions? Maybe. There might be some direct relationships like that in the computations, which is another really cool intricacy of this experiment… really cool stuff.

This is why I could be good at this… maybe this is my calling.

So how do you make the machine objectively cry? Or have any emotion? Well, suppose emotion is just a filter, in this case, or equates to predispositions which are constantly changing based on memories deposited into the ego, where memories = perception vs. reality, from birth to present. If you programmed the emotion as a filter, or as a product of predispositions, then in either case, is the emotion a wave?
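The “predispositions constantly changing based on memories” idea can be sketched as a running update: each event nudges a stored disposition toward its surprise value (perception minus reality). The update rule, the learning rate, and the toy event list below are purely my own assumptions, chosen only to show the shape of the idea.

```python
def update_disposition(disposition, perceived, reality, rate=0.2):
    """Move the stored predisposition a little toward the gap between
    what was perceived and what actually happened (the 'memory')."""
    surprise = perceived - reality
    return disposition + rate * (surprise - disposition)

mood = 0.0  # neutral predisposition "at birth"
for perceived, reality in [(8, 5), (7, 5), (2, 5)]:  # a short "life" of events
    mood = update_disposition(mood, perceived, reality)
print(round(mood, 3))  # ≈ 0.104: a faint positive bias survives the bad event
```

Run over many events, a curve like this rises and falls as memories accumulate, which is at least one literal sense in which the emotion behaves like a wave.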

***I’m telling you God it’s possible!

And that was how I decided to build an AI robot, just randomly writing.

-William Larsen