AI Features

AI Landscape

Explore the broad AI landscape and understand its core subfields such as machine learning, deep learning, natural language processing, speech recognition and synthesis, computer vision, robotics, and generative AI. This lesson helps you grasp how these areas connect and apply to real-world tasks, enhancing your foundational AI knowledge.

AI is not ML

AI and ML are often used interchangeably, but the two fields are not the same: ML is a subfield of AI. ML focuses on developing algorithms and statistical models that enable computers to learn from data and make data-driven decisions. AI, more broadly, aims to create intelligent systems capable of reasoning, learning, problem-solving, perception, and language understanding. ML systems improve their performance on a given task with experience, i.e., as they are exposed to more data, they learn and refine their models. The goals of the two fields differ as well: AI aims to mimic human intelligence, while ML aims to build models that make accurate predictions from data.
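The distinction can be made concrete with a small sketch. Below, a hand-written rule stands in for a classic, explicitly programmed system, while a line fitted with `np.polyfit` stands in for the ML approach, where the rule is estimated from example data. The hours/scores values are made up for illustration.

```python
import numpy as np

# Explicitly programmed rule: a human hard-codes the mapping.
def rule_based_score(hours):
    return 10 * hours  # the "10 points per hour" rule is written by hand

# ML approach: the mapping is estimated from example data instead.
hours = np.array([1, 2, 3, 4, 5])
scores = np.array([12, 19, 31, 42, 48])
slope, intercept = np.polyfit(hours, scores, 1)  # learn slope/intercept from data

def learned_score(h):
    return slope * h + intercept

print(rule_based_score(6))         # 60, by construction
print(round(learned_score(6), 1))  # 58.9, inferred from the examples
```

The learned rule is close to, but not identical to, the hand-written one; it reflects whatever pattern the data actually contains, which is exactly the point of ML.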

1.

We can make AI equivalent to ML by employing ML models to solve our tasks. Is this accurate?


AI landscape

Let's explore the AI landscape and how its subfields relate to one another and to the rest of computer science.

Machine learning

Machine learning is a subset of AI that enables systems to learn from data and improve their performance over time without being explicitly programmed. It includes supervised learning, unsupervised learning, reinforcement learning, and semi-supervised learning.
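The worked example below uses supervised learning, where every input comes with a known target. For contrast, here is a minimal sketch of unsupervised learning, where no targets are given and the algorithm must find structure on its own. The six 2-D points are made up for illustration; the clustering uses scikit-learn's `KMeans`.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two rough groups of points, but no target values.
X = np.array([[1.0, 1.0], [1.5, 2.0], [1.0, 1.5],
              [8.0, 8.0], [8.5, 9.0], [9.0, 8.0]])

# KMeans discovers the two groups on its own -- no labels are provided.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # first three points share one label, last three the other
```

Which cluster gets label 0 vs. 1 is arbitrary; what matters is that the grouping emerges purely from the geometry of the data.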

Let's look at an example of how supervised learning helps predict exam scores based on the number of hours studied using linear regression. In this example, we use a simple linear regression model to predict a student's score based on how many hours they studied. Instead of a human programmer writing a specific rule for the grade, the machine learns the rule itself by looking at examples.

Python 3.10.4
######### Machine Learning Application #########
def linear_regression_exam_score_prediction():
    # Import Necessary Libraries
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    # Create a Dataset
    X = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]).reshape(-1, 1)
    y = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
    plt.figure(figsize=(15, 10))
    # Visualize the Entire Dataset
    plt.scatter(X, y, color='blue', label='All Data')
    plt.title('Hours Studied vs. Score')
    plt.xlabel('Hours Studied')
    plt.ylabel('Score')
    plt.legend()
    plt.savefig("output/original.png")
    plt.show()
    # Split the Dataset into Training and Testing Sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    # Train the Linear Regression Model
    model = LinearRegression()
    model.fit(X_train, y_train)
    # Make Predictions
    y_pred = model.predict(X_test)
    # Visualize the Training, Testing, and Predicted Results
    plt.scatter(X_train, y_train, color='green', label='Training Data')
    plt.scatter(X_test, y_test, color='red', label='Testing Data')
    plt.plot(X_test, y_pred, color='blue', linestyle='--', label='Predicted')
    plt.title('Actual vs. Predicted Scores')
    plt.xlabel('Hours Studied')
    plt.ylabel('Score')
    plt.legend()
    plt.savefig("output/predicted.png")
    plt.show()
    # Evaluate the Model
    mse = mean_squared_error(y_test, y_pred)
    print(f'Mean Squared Error: {mse}')
# Calling the above function
linear_regression_exam_score_prediction()
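The last step reports the mean squared error, the average of the squared differences between actual and predicted values. A quick hand-check of the metric, using two hypothetical actual/predicted pairs (not values from the code above), looks like this:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Hypothetical actual and predicted scores for two test points.
y_true = np.array([30.0, 90.0])
y_pred = np.array([28.0, 93.0])

# MSE = mean of squared residuals: ((30-28)**2 + (90-93)**2) / 2 = (4 + 9) / 2
mse = mean_squared_error(y_true, y_pred)
print(mse)  # 6.5
```

Because the toy dataset above follows the relationship score = 10 × hours exactly, the fitted line matches the test points almost perfectly and the reported MSE is close to zero; real data is noisier and yields a larger error.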

How the code works:

  • Lines 4–8: We import the libraries the example depends on: numpy for numerical arrays, matplotlib for plotting, and scikit-learn for splitting the data, training the linear regression model, and computing the mean squared error.