AI - Artificial Intelligence
Utilizing Bayesian Regression Statistics as a Function of Machine Learning and Independent Decision Making
Overview - Creation of artificial life that is given certain data and direction in order to initiate decision making, but which uses Bayesian theory to grow and genuinely interact with and learn from its environment; in essence, making independent decisions much as humans do.
This would be excellent for robots engaged in sales, labor, law enforcement, and the military.
IP - Mentioned in our second patent as a method for our model to continually update and recreate itself in reaction to new stimuli or information.
Advantages over Current Technology - There is currently no mature technology that allows machines to make rational decisions on their own about how to interact with their environment, though work is being done in this area at present.
Start Date - Our basic model will initially incorporate this sort of rational decision making, though dedicated work will begin in 2020.
Team - TBD
Business Model - Traditional corporation and channel sales/marketing
Deliverables - TBD
Cost - $2 billion USD
ROI - Successful design and development would alter life as we know it, and the potential sales and military uses would command an extremely high value.
The Artificial Intelligence R&D direction of GSI IT.
GSI IT has always followed trends in world science and technology and keeps pace with the times, and we are committed to developing AI-based projects. We have proprietary algorithms and methods in place to build applications that use machine learning/AI to organize input data, evaluate potential solutions, and ultimately make decisions. In the US, machine learning/AI is generally considered a branch of applied statistics. In this respect, GSI is focused on developing decision-making models built to handle large amounts of source data (Big Data analytics). To be clear, at this stage GSI is not focused on creating neural networks or other architectural designs, but rather on the “consciousness” of those networks: what goes on inside them, and how and why decisions are made in multi-functional applications.
In just about any AI model you will find a statistical foundation. Without this foundation, the model would have no ability to “infer” or “predict”, that is, to “think.” Statistics is the basis of machine learning. Here is a short video from the AI/Machine Learning Department at Columbia University: https://youtu.be/-PCFxkWcatg. This video outlines the commonality between applied statistics and current thinking on machine learning.
In his 2012 article in Communications of the ACM, “A Few Useful Things to Know About Machine Learning” (Oct. 2012, Vol. 55), Pedro Domingos outlines algorithms key to machine learning, including logistic regression and Bayesian methods.
As an example, US Patent 9,374,608 states:
users’ choices are put into a latent variable binomial logistic regression model similar to the algorithm:
Yᵢ₀* = β₀′Xᵢ + ε₀
Yᵢ₁* = β₁′Xᵢ + ε₁
where ε₀ ~ EV1(0,1) and ε₁ ~ EV1(0,1), and EV1(0,1) is a standard type-1 extreme value distribution, i.e. Pr(ε₀ = x) = Pr(ε₁ = x) = e^(−x) · e^(−e^(−x))
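One way to make the EV1(0,1) assumption concrete is to simulate the two latent utilities and check that the resulting choice probability matches the familiar logistic (logit) form. The sketch below is illustrative only: it assumes a single scalar predictor, and the predictor value and coefficients are made-up placeholders, not figures from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single predictor x (e.g. "hours of sun") and coefficients
# beta_0, beta_1 for the two alternatives; these values are placeholders.
x = 1.5
beta_0, beta_1 = 0.2, 0.8

n_draws = 200_000
# EV1(0,1): standard type-1 extreme value (Gumbel) noise for each alternative.
eps_0 = rng.gumbel(loc=0.0, scale=1.0, size=n_draws)
eps_1 = rng.gumbel(loc=0.0, scale=1.0, size=n_draws)

# Latent utilities Y*_i0 and Y*_i1 as in the formulation above.
y0_star = beta_0 * x + eps_0
y1_star = beta_1 * x + eps_1

# The user picks alternative 1 whenever its latent utility is larger.
p_simulated = np.mean(y1_star > y0_star)

# With EV1 errors this choice probability has exactly the logistic (logit) form.
p_logit = 1.0 / (1.0 + np.exp(-(beta_1 - beta_0) * x))

print(f"simulated P(choice = 1): {p_simulated:.3f}")
print(f"logistic  P(choice = 1): {p_logit:.3f}")
```

Both numbers should agree to within simulation error, which is the sense in which the latent variable model above is a binomial logistic regression.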
While this formula may be confusing and difficult to understand, the key point is that we are using one independent variable, such as “hours of sun,” to predict a dependent variable, such as “quality of soybeans.” Our model also continuously optimizes its own algorithm by examining input data, deciding on potential new independent variables, for instance “amount of rainfall,” and subsequently shifting the model to account for the new input.
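Neither the patent excerpt nor this document spells out the exact updating mechanism, so the following is only a minimal sketch of the idea, assuming a conjugate Bayesian linear regression with known noise variance: the model first predicts “quality of soybeans” from “hours of sun” alone, then widens its design matrix and recomputes the posterior once “amount of rainfall” is judged relevant. All data and coefficient values are synthetic placeholders.

```python
import numpy as np

def bayes_linear_posterior(X, y, prior_mean, prior_cov, noise_var=1.0):
    """Conjugate Bayesian linear regression: posterior mean and covariance
    of the coefficients given design matrix X and targets y."""
    prior_prec = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(prior_prec + X.T @ X / noise_var)
    post_mean = post_cov @ (prior_prec @ prior_mean + X.T @ y / noise_var)
    return post_mean, post_cov

rng = np.random.default_rng(1)

# Synthetic data: "hours of sun", "amount of rainfall", "quality of soybeans".
n = 50
sun = rng.uniform(4, 12, n)
rain = rng.uniform(0, 30, n)
quality = 2.0 + 0.5 * sun + 0.1 * rain + rng.normal(0, 0.5, n)

# Stage 1: model quality ~ intercept + hours of sun, with a weak prior.
X1 = np.column_stack([np.ones(n), sun])
m1, C1 = bayes_linear_posterior(X1, quality, np.zeros(2), np.eye(2) * 10.0)

# Stage 2: the model decides "amount of rainfall" is a useful new independent
# variable, widens the design matrix, and recomputes the posterior.
X2 = np.column_stack([np.ones(n), sun, rain])
m2, C2 = bayes_linear_posterior(X2, quality, np.zeros(3), np.eye(3) * 10.0)

print("coefficients with sun only:      ", np.round(m1, 2))
print("coefficients with sun + rainfall:", np.round(m2, 2))
```

In a full system the decision to admit a new variable would itself be data-driven, for example by comparing how well the competing models explain the observations, but that selection step is omitted from this sketch.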