
Artificial Intelligence (AI) and Machine Learning: A Journey of Innovation and Transformation

Artificial Intelligence (AI) and Machine Learning (ML) are now such common terms that even technology-agnostic people are aware of them. These technologies have already started revolutionising industries and enhancing human capabilities, bringing significant changes to society and livelihoods. For a long time, humans have been trying multiple ways to feed intelligence into machines and replicate human intelligence in them. After decades of research, we are very close to achieving this.

The term Artificial Intelligence is not new. It officially came into existence at the 1956 Dartmouth Workshop, though pioneers like Alan Turing had theorised about building machines capable of simulating human intelligence even before that. Turing proposed the “Turing Test”, which holds that a machine can be considered intelligent if its responses are indistinguishable from a human's. The aspirations of such pioneers were highly optimistic; they believed that machines performing intellectual tasks like humans were just around the corner. However, the complexity and challenges of replicating human cognitive processes made progress very slow.

Throughout its history, AI has gone through multiple eras of research, with each era achieving a bigger milestone.

It all started with “Expert Systems”, which laid the foundation for creating practical applications that leverage AI. During the 1960s and 1970s, this was the first step from theoretical discussion to the development of computer programs that mimicked human expertise in specific domains. The idea was to create software that could emulate expert decisions by capturing the knowledge and problem-solving abilities of human experts and building that knowledge into the system. This gave non-experts access to expert-level insights and recommendations.

An Expert System was built from three major components. The Knowledge Base stored the domain-specific information, rules, facts, and heuristics; this vast repository of data was collected from human experts through interviews and documentation. The Inference Engine performed logical operations over the knowledge base to derive conclusions, operating somewhat like human reasoning. Finally, the User Interface allowed users to interact with the system, input data, and receive recommendations or solutions based on the expert knowledge stored in the system.

Expert Systems found use in medical diagnosis, where systems like MYCIN (developed at Stanford) could assist doctors in diagnosing infectious diseases by analysing patient symptoms and recommending treatments. DENDRAL focused on chemical analysis and the identification of molecular structures, while other expert systems were employed in financial analysis and investment strategies. The approach had a few challenges. Acquiring expert knowledge was a complex and time-consuming process, often requiring extensive interviews and interactions with human experts. Maintaining and updating the knowledge base meant repeating that entire process, which was a major obstacle to proper maintenance. Lastly, Expert Systems required substantial resources to run, and scaling such solutions was not practically possible.
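
To make the three components concrete, here is a minimal, purely illustrative sketch in Python of a rule-based expert system. The symptom facts and IF-THEN rules are hypothetical examples invented for this sketch; they are not drawn from MYCIN or DENDRAL.

# Knowledge base: facts and IF-THEN rules captured from (imaginary) experts.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "chest pain"}, "recommend chest X-ray"),
    ({"rash", "fever"}, "possible viral infection"),
]

def inference_engine(facts):
    """Forward chaining: keep firing rules whose conditions are satisfied
    until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def user_interface():
    """Very small text interface: the user enters observed symptoms and
    receives the conclusions derived from the knowledge base."""
    observed = input("Enter observed symptoms, comma separated: ")
    initial = {s.strip() for s in observed.split(",") if s.strip()}
    derived = inference_engine(initial) - initial
    print("Conclusions:", ", ".join(sorted(derived)) or "none")

if __name__ == "__main__":
    user_interface()

Real expert systems held thousands of such rules and far richer inference strategies, but the division of labour between knowledge base, inference engine, and interface is the same.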

The second era of AI spanned the 1980s through the 1990s. There was a transformative shift in the field of artificial intelligence with the emergence of machine learning. This pivotal era laid the groundwork for algorithms that enabled computers to learn from data and make predictions, a concept that would revolutionise a wide array of industries and set the stage for modern AI advancements. Machine learning, a subfield of AI, began to gain prominence as researchers shifted their focus from rule-based AI systems to developing algorithms that could improve their performance by learning from data. This era paved the way for multiple advancements in AI, through Algorithm Development, Computational Power, Pattern Recognition, Natural Language Processing, Data Mining, Robotics, and Recommendation Systems.

These advancements brought a few challenges as well. Data availability was one, since labelled data was required for training machine learning models. Computational constraints were also a big problem at the time, as the available hardware made training complex models a time-consuming task. Added to these was the complexity of the algorithms, which demanded expertise to achieve optimal performance that non-experts did not have.

The principles and techniques developed during this era continue to shape the AI landscape today. Machine learning has evolved into deep learning, with neural networks and deep architectures becoming the foundation of many AI applications. The democratisation of machine learning through open-source libraries and cloud computing has further expanded its reach, making it accessible to a broader range of industries and practitioners.   

The late 1980s and early 1990s are generally called an AI Winter. The initial boom and enthusiasm around AI had set a few unrealistic expectations, and the delay in reaching them diminished funding for AI research and development. But after a brief period, the AI community bounced back in the late 1990s and early 2000s with several key developments: Machine Learning Advances, Practical Applications, Robust Algorithms, Increased Computing Power, and Commercial Interest. This resurgence laid the groundwork for many of the AI breakthroughs seen today. It set the stage for the development of modern AI technologies, including machine learning, deep learning, natural language processing, and computer vision, which are now driving transformative changes across industries such as healthcare, finance, and transportation.

The current era of AI, which began in the 2010s, has unleashed the full potential of the field. We call it the "Deep Learning Revolution": neural networks, particularly deep neural networks, allowed machines to perform tasks previously considered beyond their capabilities.

Deep learning became possible mainly because of big data availability, high computational power through GPUs (Graphics Processing Units), and improved algorithms. The Deep Learning Revolution reshaped AI and opened the door to revolutionary applications. Computer vision now surpasses human performance in tasks like image classification and object detection, which form the basis of applications in autonomous vehicles, medical imaging, and more. Natural Language Processing (NLP) has made major breakthroughs in machine translation, sentiment analysis, chatbots, and the development of virtual personal assistants like Siri and Alexa. Deep learning also plays a pivotal role in autonomous vehicles, robotics, and drones by enhancing perception and decision-making capabilities, and it is used to diagnose diseases from medical images, predict patient outcomes, and assist in drug discovery.

There are many different types of Artificial Intelligence (AI) and Machine Learning (ML), which apply to various problem-solving scenarios. AI is broadly classified into Weak AI, designed for specific tasks and operating within predefined parameters, like Siri and Alexa (virtual personal assistants); Strong AI, which is mostly theoretical and has not yet been realised, and would possess human-like cognitive abilities, enabling it to understand, learn, and perform any intellectual task that a human can; and Artificial Narrow Intelligence (ANI), which refers to AI systems that excel at specific tasks, such as image recognition or playing chess.

Similarly, Machine Learning is classified into several types.

Supervised Learning:

In supervised learning, models are trained on labelled data, where the input data is paired with corresponding target labels or outcomes. The goal is to learn a mapping function from inputs to outputs, making it suitable for tasks like classification and regression.

Unsupervised Learning:

Unsupervised learning involves training models on unlabelled data, with the aim of discovering hidden patterns, structures, or relationships within the data. Clustering and dimensionality reduction are common tasks in unsupervised learning.

Semi-Supervised Learning:

This approach combines elements of both supervised and unsupervised learning. It leverages a small amount of labelled data along with a larger pool of unlabelled data to improve model performance.
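
As a quick illustration of the difference between the first two types, here is a short sketch in Python. It assumes the scikit-learn library is available, and the dataset and model choices are arbitrary examples rather than recommendations.

# Supervised vs. unsupervised learning on the classic Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: inputs are paired with target labels (y).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: labels are ignored; the model looks for structure.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", [int((clusters == c).sum()) for c in range(3)])

The supervised model is judged against known answers, while the clustering step simply groups similar samples, which is exactly the distinction drawn above.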

Reinforcement Learning:

Reinforcement learning is used for training agents to make sequences of decisions by interacting with an environment. Agents receive feedback in the form of rewards or penalties based on their actions. It's widely applied in robotics, game-playing, and autonomous systems.

Deep Learning:

Deep learning is a subset of machine learning that involves neural networks with multiple layers (deep neural networks). It's particularly effective for tasks like image recognition, natural language processing, and speech recognition.
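
To make the reinforcement-learning loop above concrete, here is a minimal tabular Q-learning sketch in Python. The toy "corridor" environment and the parameter values are hypothetical choices for illustration only.

import random

N_STATES = 5            # states 0..4; reaching state 4 yields the reward
ACTIONS = [-1, +1]      # move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward plus discounted best future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned Q-values (left, right) per state:")
for s, q in enumerate(Q):
    print(s, [round(v, 2) for v in q])

After training, the "move right" values dominate in every state, meaning the agent has learned, purely from rewards and penalties, the policy that reaches the goal.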

These types of AI and ML cover a wide spectrum of capabilities and applications. Depending on the problem at hand and the availability of data, researchers and practitioners select the most suitable type or combination of types to achieve the desired results. AI/ML has grown rapidly in recent times and has started creating several job opportunities in the market.

Machine Learning Engineer:

The need to develop and deploy new ML models has created strong demand for Machine Learning Engineers. They collaborate with stakeholders to define problems, collect and preprocess data, select appropriate models, and fine-tune hyperparameters. These professionals ensure models are deployed into production environments and monitor them for optimal performance. Strong programming skills, knowledge of machine learning libraries, and proficiency in data handling are essential for this role, along with effective collaboration and communication abilities.

Data Scientist:

Data Scientists play a critical role in organisations by collecting, cleaning, and analysing data to derive valuable insights. They perform exploratory data analysis, engineer features, and select appropriate machine learning models for predictive or analytical tasks. Data Scientists collaborate with cross-functional teams, communicate findings to non-technical stakeholders, and ensure data privacy and ethical practices. Their responsibilities also include model deployment, A/B testing, and staying up-to-date with the latest developments in data science. Strong analytical, programming, and statistical skills, along with domain expertise, are essential for this role.

AI Ethicist:

AI Ethicists are responsible for ensuring the ethical development and deployment of artificial intelligence and machine learning technologies. They establish ethical guidelines, review AI systems for bias and transparency, and mitigate ethical concerns. This role involves promoting fairness, accountability, and privacy compliance, engaging stakeholders, and educating teams on ethical AI practices. AI Ethicists play a critical role in safeguarding the responsible use of AI in organisations and advocating for ethical AI standards within the broader community.

NLP Engineer:

NLP Engineers specialise in processing and understanding human language data. They collect and preprocess text, engineer features, and develop NLP models for tasks like sentiment analysis, named entity recognition, and machine translation. These professionals choose appropriate NLP algorithms, fine-tune models, and evaluate their performance. They may also work on domain-specific language processing and multimodal NLP, deploying models into production and monitoring their ongoing accuracy. Expertise in NLP techniques, programming languages like Python, and NLP libraries is essential for this role, contributing to applications ranging from chatbots to language translation.

There are also a handful of other job opportunities, like AI/ML Software Developer, AI Product Manager, Data Engineer, Computer Vision Engineer, Robotics Engineer, AI Consultant, Quantitative Analyst (Quant), Healthcare AI Specialist, AI Educator/Trainer, AI Entrepreneur/Startup Founder, AI Legal Expert, and AI/ML Research Intern.

The journey from invention to innovation of AI & ML has been marked by periods of excitement, stagnation, and resurgence. The diversity of AI types, the complexity of machine learning approaches, and the multitude of challenges and opportunities they present have converged to create a landscape ripe for exploration and discovery. As these technologies continue to advance, their impact on industries, societies, and the very nature of human existence promises to be profound and revolutionary.

About the Author

Prabahar Murugan is a techpreneur with over a decade of experience in technical and managerial roles. He founded a software product company, Amizhth Techno Solutions, in Madurai in 2020. The company has since spread its wings across multiple products, namely Varuvaai, a SaaS-based e-commerce online store builder; e-mugavari, a SaaS-based website builder; and Kanimai, a SaaS-based Educational Institute Management System, and has also branched into other disciplines such as a Design Studio and a Training Academy.
