Knowledge Gaps And Chat-Bots

January 10th 2019 – By: William Larsen – Civilians News – “News For All Views” 


I’ve almost completed my first AI software: a chat-bot with “human-like” personality traits. In building it, I’ve come to find a number of things greatly fulfilling about the science behind Artificial Intelligence, most notably how simple it can be. Take chat-bot software, for example.

Chat-bots themselves are really just a simple form of AI, but take that concept one step further: one of the simplest chat-bot designs is shown, for illustration purposes, on the home page of ArtificialIntelligenceProject.com. That design takes inputs, treats them as “information the machine has heard or learned,” and then repeats them back at a ratio of 1:1, mimicking human conversation.
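That 1:1 repeat-back design can be sketched in a few lines of Python. This is a hypothetical minimal version of the idea, not the actual code on ArtificialIntelligenceProject.com:

```python
import random

class MimicBot:
    """A minimal chat-bot sketch that treats every input as something
    'heard or learned', then repeats learned phrases back 1:1."""

    def __init__(self):
        self.memory = []  # everything the bot has "heard" so far

    def respond(self, user_input):
        self.memory.append(user_input)     # learn the new phrase
        return random.choice(self.memory)  # repeat something it has heard

bot = MimicBot()
print(bot.respond("hello"))  # only "hello" is known yet, so it echoes it
```

As the memory grows, the bot’s replies start drawing on everything it has previously been told, which is the 1:1 mimicry described above.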

And that’s one design for a chat-bot, albeit a simplistic illustration of how AI can work, quite literally in its simplest form. (*But what if bugs/insects actually think that way?) Nevertheless, as I finish building a much more complex 2.0 model of that software, I’ve come to realize there are a great number of variations in how one can build an AI or chat-bot.

Which is to say there are a number of varying complexities and types of chat-bots (primitive AI or not) that one could build. For example, Mitsuku, winner of the 2018 Loebner Prize (a Turing Test competition), is a chat-bot programmed to respond to questions directly with previously written responses. In other words, its creator has gone over every likely question or phrase a person might input and assigned it a set response, and then refined those responses over some 10-15 years of programming.
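A pattern-to-response design of that kind can be sketched as a simple lookup table. The rules below are invented for illustration; a real system like Mitsuku uses thousands of hand-refined AIML patterns, not a small Python dictionary:

```python
# Hypothetical rule table: each known input phrase maps to a canned reply.
RESPONSES = {
    "hello": "Hi there! How are you today?",
    "what is your name": "My name is Bot.",
}
DEFAULT = "I'm not sure I understand."

def respond(user_input):
    # Normalize the input (lowercase, trim punctuation) before lookup.
    key = user_input.lower().strip("?!. ")
    return RESPONSES.get(key, DEFAULT)

print(respond("What is your name?"))  # matches a programmed response
```

Refining such a bot is then a matter of growing and polishing the rule table, which is exactly the long, iterative process described above.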

That’s another way of programming a chat-bot.

However, let’s examine another, slightly more complex means of going about this task. Google, Microsoft, and companies like that probably started building their chat-bots and AI software on a more mathematically based model. (Which I think is fairly safe to assume.)

For example: every inputted word in these models is assigned a numerical value, and those numerals are then expressed in equations, giving you simple, mathematically based AI.

Word 1 + Word 2 + Word 3 = X%, the probability of Word (x), Word (z), or Word (q) coming next.
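A minimal sketch of that idea, assuming a tiny invented corpus in place of real search data, counts how often one word follows another and turns the counts into probabilities:

```python
from collections import Counter, defaultdict

# Tiny hypothetical corpus standing in for large-scale search data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows a given word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Return the probability of each word that can follow `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# "cat" follows "the" in 2 of 4 cases, so its probability is 0.5
```

Scale the corpus up to billions of queries and the same counting scheme starts producing the human-like word associations described above.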

And in fact, Google has so much search data that this method alone could create a very human-like AI (or chat-bot) relatively easily, simply based on the probabilities of words that people connect through everyday search and browsing.

And that data could then create a very powerful AI software, based upon overarching trends in the data. Now take that concept and make it more complex: subtract a word (or two), check the probabilities again, and then begin creating algorithms. For example, would (word1) + (word2) have a greater probability of eliciting (word3) from the respondent if (wordX) were inserted into the equation, or (wordY)?
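The wordX-versus-wordY comparison can be sketched with the same bigram-counting idea; the corpus and word choices below are purely hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical toy data; a real system would mine search logs instead.
corpus = "good morning sun good evening world bad morning traffic".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def elicit_prob(inserted_word, target_word):
    """Probability that target_word follows inserted_word in the data."""
    counts = follows[inserted_word]
    total = sum(counts.values())
    return counts[target_word] / total if total else 0.0

# Which insertion is more likely to elicit "morning" from the respondent?
better = max(["good", "bad"], key=lambda w: elicit_prob(w, "morning"))
print(better)  # in this toy data, "bad" always precedes "morning"
```

Repeating comparisons like this across many candidate words is one way the “algorithms” mentioned above could begin to take shape.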

And in this way, you could create a very powerful AI software relatively quickly, simply through mathematical reasoning (though note that this version of AI would be powerful but also non-human and robotic in nature). However, consider how an AI can grow in accuracy or complexity by “branching off of” pillars within the software, psychological equations, so to speak, which form through underlying trends in that data. (*Or, in this case, one hinging equation.)

And my personal designs (on ArtificialIntelligenceProject.com) are actually more complicated than that, but to some extent, using a formula like the one above to code an AI is sensible for some machine-learning models. Also note how similar chat-bots are to search engines in this way. I’ll explain how I’m coding my own AI chat-bot after I finish it later this summer, but that’s two or three ways of programming a chat-bot/AI with very little real complexity.

These are some of the most primitive forms of AI, but they are AI nonetheless, and semi-powerful forms of it. I also find it fascinating to think of this psychology through the lens of computer science, and to think of the brain performing the same functions biologically, i.e. neural networks. These are simple, surface-level complexities of AI, and as a deeper personality adds equations, filters, algorithms, and things of that nature, the AI can grow more and more human!

Which then leads me to question the “knowledge gaps” in our education system today! Because none of this is that complex, and yet it’s surely the future of computing!

And as the founder of an AI startup, all by myself (PDX Larsen LLC), in a field that’s just beginning to reach its relatively primitive peaks and valleys, with a somewhat arbitrary science behind it, I find this field raises many questions concerning the business of science itself. But then again, cognitive dissonance is an interesting topic all to itself.

Nevertheless, long story short: because of the arbitrary nature of psychology combined with computer science, which has led to these early forms and early stages of AI, I’ve personally decided to make my own AI designs for my next AI software and chat-bots, and I will make them public information, right here on CiviliansNews.com, as I progress with my work. (*Be the change.)

And let’s see what the future holds.

Design specs for my *AI 2.0 software (still un-named) will be printed right here on CiviliansNews.com, later on.

-William Larsen, Civilians News “News For All Views.”