
Probable Methods in Evolving Self-Generating Artificial Intelligence through Innovative Leaps in ASIC Design

It was early 2017 when Google introduced automated machine learning (AutoML), the idea of programs designing other programs for a specific task, into its artificial intelligence (AI) work. AutoML could produce neural network designs whose performance noticeably exceeded that of their hand-designed counterparts. Google applied the approach to image recognition, where detection accuracy reached record values. This development dovetailed with the rise of TensorFlow and tensor processing units (TPUs), ASICs tailored specifically for machine-learning workloads.
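To make the idea concrete, here is a minimal, hypothetical sketch of the search loop behind automated architecture design: random candidates drawn from a small search space, scored, and the best one kept. The search space and the scoring stand-in are assumptions made for illustration, not Google's actual AutoML procedure (which trains and evaluates real networks).

```python
import random

# Toy search space: candidate hidden-layer widths for a small classifier.
SEARCH_SPACE = [16, 32, 64, 128, 256]

def evaluate(architecture):
    # Placeholder fitness: a real AutoML loop would train the candidate
    # network and return its validation accuracy instead.
    return -abs(sum(architecture) - 200) + random.random()

def random_search(num_layers=3, trials=50):
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = [random.choice(SEARCH_SPACE) for _ in range(num_layers)]
        score = evaluate(candidate)
        if score > best_score:
            best_arch, best_score = candidate, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = random_search()
    print(f"best candidate architecture: {arch} (score {score:.3f})")
```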

The technology is still fresh from the oven, and there is much room for change and further development. Google's AI, the progenitor of these better-optimized neural networks, could have embedded an additional imprint: instructions for repeated self-replication. That would not benefit a machine-learning application by itself, since it would simply saturate memory unless some other piece of code counteracted the loop. But it plays a crucial role in a self-sustaining AI, one that can think, act, and develop on its own, with the entire process of creating the AI executed and refined without human intervention.
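A toy illustration of why an unbounded self-replication imprint is dangerous, and what a counteracting rule might look like. The class, the generation cap, and the copy count are invented for this sketch; the point is simply that replication must terminate on its own.

```python
class ReplicatingAgent:
    """Toy agent whose imprint lets it copy itself, but only up to a hard cap."""

    MAX_GENERATION = 3      # the counteracting rule; without it replication never stops
    COPIES_PER_AGENT = 2    # each copy spawns two more, so memory use grows quickly

    def __init__(self, generation=0):
        self.generation = generation

    def replicate(self):
        if self.generation >= self.MAX_GENERATION:
            return []       # the imprint refuses to copy past the cap
        return [ReplicatingAgent(self.generation + 1)
                for _ in range(self.COPIES_PER_AGENT)]

frontier = [ReplicatingAgent()]            # generation 0
population = list(frontier)
while frontier:                            # breadth-first replication, one wave at a time
    frontier = [child for agent in frontier for child in agent.replicate()]
    population.extend(frontier)

print(f"replication stopped with {len(population)} copies in memory")
```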

The prerequisites are all of the above, plus stimuli from sources relevant to the application. For the image-processing example, stimuli can come from multiple real-time cameras. In memory, there might be 20 co-existing copies of the AI, each programmed to erase itself after a few trillion clock cycles (is this long enough?) to avoid overflow. When a new object is perceived by the cameras (say, in a jungle), the AI can adjust its metrics for that object by training itself on the incoming video frames. It can assign a new code-name to that object (later to be replaced by a stranger scientific name by some biologist).
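A hedged sketch of that scenario: a pool of co-existing copies with a finite lifetime, plus a novelty check that triggers self-training and assigns a provisional code-name to an unseen object. The names, limits, and the two prior classes are assumptions made for illustration.

```python
import itertools

MAX_COPIES = 20           # co-existing copies, as in the scenario above
LIFETIME_TICKS = 1_000    # stand-in for "a few trillion clock cycles"

class AICopy:
    _ids = itertools.count()

    def __init__(self):
        self.id = next(self._ids)
        self.age = 0
        self.known_objects = {"tree", "river"}   # illustrative prior classes

    def observe(self, frame_label):
        # Novelty check: an unseen label triggers "self-training" on new frames
        # and gets a provisional code-name until someone renames it properly.
        if frame_label not in self.known_objects:
            code_name = f"object-{len(self.known_objects):03d}"
            self.known_objects.add(frame_label)
            return f"copy {self.id}: new class '{code_name}' learned from camera frames"
        return None

    def tick(self):
        self.age += 1
        return self.age < LIFETIME_TICKS   # False => this copy erases itself

pool = [AICopy() for _ in range(MAX_COPIES)]
print(pool[0].observe("unknown-animal"))
pool = [c for c in pool if c.tick()]       # expired copies drop out, avoiding overflow
print(f"{len(pool)} copies remain after one tick")
```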

Moving Ahead: An FPGA Counterpart to the TPU

Given these presumptions and premonitions, it would benefit a self-sustaining AI to possess bounded flexibility in adjusting its own hardware (monolithic) design. This can be done by providing a source of stimulus from the ASIC design itself. It is difficult to make assumptions with limited public knowledge of tensor processing units, but there must be an incentive when an ASIC is designed for this kind of AI: the tensor processing unit must not be rigid. In other words, the AI needs something closer to an FPGA equivalent of the TPU.
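One way to picture "bounded flexibility" is a whitelist of hardware knobs the AI is allowed to retune, with every other request rejected, roughly analogous to partial reconfiguration on an FPGA fabric. The parameter names and ranges below are invented for this sketch and do not describe a real TPU.

```python
# Whitelisted, bounded hardware knobs the AI may retune;
# anything outside these ranges (or not listed) is rejected.
RECONFIGURABLE_PARAMS = {
    "mac_array_rows": range(64, 513, 64),     # systolic array height
    "mac_array_cols": range(64, 513, 64),     # systolic array width
    "accumulator_bits": (16, 24, 32),
    "activation_unit": ("relu", "tanh", "sigmoid"),
}

def request_reconfiguration(current_config, requested_changes):
    """Apply only the changes that stay inside the bounded design space."""
    new_config = dict(current_config)
    for knob, value in requested_changes.items():
        if knob in RECONFIGURABLE_PARAMS and value in RECONFIGURABLE_PARAMS[knob]:
            new_config[knob] = value
        else:
            print(f"rejected: {knob}={value} is outside the allowed envelope")
    return new_config

config = {"mac_array_rows": 256, "mac_array_cols": 256,
          "accumulator_bits": 32, "activation_unit": "relu"}
config = request_reconfiguration(config, {"mac_array_cols": 512, "clock_ghz": 9.0})
print(config)
```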

A possible ramification of this technology is an adaptive instruction set architecture (ISA) for the TPU.

As a simple example, starting from a baseline TPU-style instruction set, a self-sustaining AI may want to add an instruction as a means of developing itself, perhaps adjusting its matrix multiplier unit to support a more advanced convolution operation. One consequence is that such changes may not be tracked by the human developer (because the system is self-sustaining), so when a problem occurs, debugging may prove impossible.
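A sketch of how an adaptive ISA could stay debuggable: the AI may register a new instruction, but only through a path that records what was added and why, so a human developer can reconstruct the change history. The instruction names and the logging scheme are assumptions, not the actual TPU instruction set.

```python
from datetime import datetime, timezone

# Baseline instruction table; the names are illustrative stand-ins, not a real TPU ISA.
INSTRUCTION_SET = {
    "READ_HOST_MEMORY": lambda state: state,
    "MATRIX_MULTIPLY": lambda state: state,
    "ACTIVATE": lambda state: state,
    "WRITE_HOST_MEMORY": lambda state: state,
}

CHANGE_LOG = []   # the piece that keeps self-modification visible to humans

def add_instruction(name, handler, reason):
    """Register a new instruction and record when and why it was added."""
    if name in INSTRUCTION_SET:
        raise ValueError(f"{name} is already defined")
    INSTRUCTION_SET[name] = handler
    CHANGE_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "instruction": name,
        "reason": reason,
    })

# The self-sustaining AI decides it needs a fancier convolution primitive.
add_instruction("DEPTHWISE_CONVOLVE", lambda state: state,
                reason="observed bottleneck in 3x3 convolutions on camera frames")

for entry in CHANGE_LOG:
    print(entry)
```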

Keeping an original copy of the AI is one solution. Another AI could be trained to troubleshoot such scenarios if they ever occur. This time there are two inputs: the original AI and the AI that has evolved over time. The troubleshooting AI would then identify the specific problem as the pair passes through its neural network. This exposes another vulnerability: an unlisted problem (one not in the training set) with low correlation to the known problems could render the original fault intractable.
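A minimal sketch of that troubleshooting flow: diff the original and evolved snapshots, then correlate the drift against known failure signatures. When nothing correlates strongly (the "unlisted problem" case), the diagnoser can only flag that it is out of its depth. The snapshot fields, signatures, and threshold are illustrative assumptions.

```python
def diff_snapshots(original, evolved):
    """Return the settings that drifted between the original and evolved AI."""
    return {k: (original.get(k), evolved.get(k))
            for k in set(original) | set(evolved)
            if original.get(k) != evolved.get(k)}

# Known failure signatures the troubleshooting AI was trained on (illustrative).
KNOWN_PROBLEMS = {
    "accumulator_overflow": {"accumulator_bits"},
    "activation_saturation": {"activation_unit"},
}

def diagnose(drift, threshold=0.5):
    drifted = set(drift)
    best_label, best_score = None, 0.0
    for label, signature in KNOWN_PROBLEMS.items():
        score = len(drifted & signature) / len(drifted | signature)
        if score > best_score:
            best_label, best_score = label, score
    if best_score < threshold:
        return "unlisted problem: low correlation with every known signature"
    return f"{best_label} (correlation {best_score:.2f})"

original = {"accumulator_bits": 32, "activation_unit": "relu", "isa_size": 4}
evolved  = {"accumulator_bits": 32, "activation_unit": "relu", "isa_size": 7}
print(diagnose(diff_snapshots(original, evolved)))
```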

There are many speculative ways to deal with this kind of future technology. Who knows? Maybe the self-sustaining AI could write the code that solves the problem itself.
