Answer To: CAP 6635 Artificial Intelligence Homework 1
Chirag answered on Feb 04 2022
1. Agent: An agent takes input from the environment through its sensors, performs operations on that input, and acts back on the environment through its actuators. The task an agent solves is specified by its Performance measure, Environment, Actuators and Sensors (PEAS).
The PEAS description of the task environment for an assembly-line part-picking robot is as follows:
1. Performance measure: percentage of parts placed in the correct bins.
2. Environment: a conveyor belt with parts, and bins.
3. Actuators: jointed arm and hand.
4. Sensors: camera and joint-angle sensors.
The PEAS description of the task environment for a robot that plays and wins a soccer game is:
1. Performance measure: number of goals attempted, goals scored, goals saved, and wrongly kicked balls.
2. Environment: the soccer field (arcs, defense area, attack area, middle area, penalty area), the ball, the referee robot, teammate robots, opponent robots, weather conditions and the audience.
3. Actuators: the robot's legs, arms and head.
4. Sensors: camera, communication devices and other on-board sensors.
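As an aside, the two PEAS descriptions above can be written down as plain Python dictionaries; this is only an illustrative sketch, and the keys simply restate the lists above:

part_picking_robot = {
    "performance": "percentage of parts in correct bins",
    "environment": ["conveyor belt", "parts", "bins"],
    "actuators": ["jointed arm", "hand"],
    "sensors": ["camera", "joint-angle sensors"],
}

soccer_robot = {
    "performance": ["goals attempted", "goals scored", "goals saved"],
    "environment": ["soccer field", "ball", "teammates", "opponents", "referee robot"],
    "actuators": ["legs", "arms", "head"],
    "sensors": ["camera", "communication devices"],
}

for name, peas in [("part-picking robot", part_picking_robot),
                   ("soccer robot", soccer_robot)]:
    print(name)
    for component, value in peas.items():
        print("  ", component, "->", value)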
2. A. Percepts – the inputs an agent perceives at a given moment. They are detected by the sensors, and the agent's action at any instant can depend on the entire percept sequence observed so far.
B. Actions – the operations carried out by the agent, chosen on the basis of its percepts.
C. Actuators – the components that control the agent's movements and, more generally, carry out its actions.
D. Sensors – the components that detect changes in the environment.
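Putting these four pieces together, the sense -> decide -> act loop can be sketched in Python; the thermostat world below is a toy example of our own, not part of the assignment:

class ToyWorld:
    """Toy environment: a temperature the agent can sense and a log of the
    actions its actuators apply."""
    def __init__(self):
        self.temperature = 30
        self.log = []

    def current_percept(self):          # what the sensors report
        return self.temperature

    def execute(self, action):          # what the actuators do
        self.log.append(action)
        if action == "cool":
            self.temperature -= 1


class ThermostatAgent:
    """Percepts come in through sense(), the agent program chooses an action,
    and act() applies it through the actuators."""
    def __init__(self):
        self.percept_sequence = []      # full percept history

    def sense(self, world):
        percept = world.current_percept()
        self.percept_sequence.append(percept)
        return percept

    def program(self, percept):
        return "cool" if percept > 25 else "idle"

    def act(self, world, action):
        world.execute(action)


world, agent = ToyWorld(), ThermostatAgent()
for _ in range(3):
    agent.act(world, agent.program(agent.sense(world)))
print(world.log, world.temperature)     # ['cool', 'cool', 'cool'] 27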
3. Percept Sequence Table:
Pseudo code:

function VACUUM_CLEANER([location, status, idle_count]) returns an action
    if status == Dirty then
        return Suck
    else if idle_count < 2 then    # status is Clean
        return Idle
    else
        return SwitchLocation

idle = 0
loop forever
    action = VACUUM_CLEANER([v.location, v.status, idle])
    if action == SwitchLocation then
        move from the current location to the other one (A <-> B)
        idle = 0
    else
        perform the action
        if action == Idle then idle = idle + 1
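For reference, here is a small runnable Python version of the same reflex agent with an idle counter; the two-room world used to exercise it is our own minimal assumption:

import random

def vacuum_cleaner(location, status, idle_count):
    """Reflex agent: suck if dirty, idle up to twice when clean, then switch."""
    if status == "dirty":
        return "suck"
    if idle_count < 2:
        return "idle"
    return "switch"

# Minimal two-room world, assumed only for testing the agent program.
world = {"A": random.choice(["clean", "dirty"]),
         "B": random.choice(["clean", "dirty"])}
location, idle = "A", 0

for step in range(10):
    action = vacuum_cleaner(location, world[location], idle)
    print(f"step {step}: at {location} ({world[location]}) -> {action}")
    if action == "suck":
        world[location] = "clean"
        idle = 0
    elif action == "idle":
        idle += 1
    else:                       # switch location and reset the idle counter
        location = "B" if location == "A" else "A"
        idle = 0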
4. The answers are as follows:
Tic-tac-toe – deterministic
Tetris – episodic
Robot Soccer competition – multi-agent (number of agents)
Texas hold'em – discrete
5. The problem here is to place 4 queens on a 4x4 chessboard so that no two queens are in the same row, column or diagonal.
Formulating the problem involves choosing a representation of the states, the initial state, the operators (successor function) and the goal test. Several formulations are possible:
Problem Formulation 1
States: any arrangement of 0 to 4 queens on the board
Initial state: 0 queens on the board
Successor function: add a queen to any empty square of the board
Goal test: 4 queens on the board with no queen attacking another
The initial state has 16 successors (one for each square); at the next level each state has 15 successors, and so on. The search tree can be restricted by generating only those successors in which no queen is attacked: each new queen is checked against the queens already placed, and the depth of the tree is 4.
Formulation 2
State: any arrangement of 4 queens on the board
Initial state: all four queens placed in column 1
Successor function: change the position of any one of the queens
Goal test: 4 queens on the board with no queen attacked
Formulation 3
State: arrangement of k queens in the first k rows with no queen attacking another
Initial state: 0 queens on the board
Successor function: add a queen to the (k+1)-th row so that no queen is attacked
Goal test: 4 non-attacking queens on the board (see the sketch below)
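A small Python sketch of formulation 3, placing one queen per row and generating only non-attacking successors (the helper names are our own):

def attacks(q1, q2):
    """q = (row, col); queens attack on the same column or diagonal
    (rows are distinct by construction in this formulation)."""
    return q1[1] == q2[1] or abs(q1[0] - q2[0]) == abs(q1[1] - q2[1])

def successors(state, n=4):
    """Add a queen to row k+1 (k = len(state)) so that no queen is attacked."""
    k = len(state)
    for col in range(n):
        new_q = (k, col)
        if all(not attacks(new_q, q) for q in state):
            yield state + [new_q]

def solve(state=None, n=4):
    state = state or []            # initial state: 0 queens
    if len(state) == n:            # goal test: n non-attacking queens
        return state
    for s in successors(state, n):
        result = solve(s, n)
        if result:
            return result
    return None

print(solve())   # e.g. [(0, 1), (1, 3), (2, 0), (3, 2)]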
6. Search tree is given as:
7. Search tree with 21 nodes
8. BFS and DFS memory consumption:
BFS uses more memory than DFS. BFS keeps O(b^d) nodes on the fringe (b = branching factor, d = search depth), whereas DFS keeps only O(b·m) nodes (m = maximum depth); here the O-notation indicates the amount of memory used.
With a branching factor of 4 and a search depth of 3, BFS therefore uses more memory: at each level of execution DFS keeps no more than 3 unexpanded siblings on the fringe, whereas BFS must hold an entire frontier (the nodes from F to U) in memory at once.
The fringe at each step is shown in the table below:

Fringe        | Node visited | Fringe size | Search depth
0 (empty)     | A            | 0           | 1
nodes C, D, E | B            | 3           | 2
nodes G, H, E | F            | 3           | 3
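To make the comparison concrete, the following small sketch runs BFS and DFS on a complete tree with branching factor 4 and depth 3 and reports the largest fringe each one needs; the tree here is synthetic and only meant to illustrate the growth rates, not the tree from the question:

from collections import deque

def children(node, b=4, depth=3):
    """Synthetic complete tree: a node is the tuple of child indices on its path."""
    if len(node) >= depth:
        return []
    return [node + (i,) for i in range(b)]

def max_fringe(pop_side):
    """Generic tree search; pop_side = 'left' gives BFS, 'right' gives DFS."""
    fringe, largest = deque([()]), 1
    while fringe:
        node = fringe.popleft() if pop_side == "left" else fringe.pop()
        fringe.extend(children(node))
        largest = max(largest, len(fringe))
    return largest

print("BFS max fringe:", max_fringe("left"))   # grows with b**d
print("DFS max fringe:", max_fringe("right"))  # grows with roughly b*d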
9. The diagram for the implementation is given below.
The code for the implementation is as follows:
"""
Best-First Searching
@author: huiming zhou
"""
import os
import sys
import math
import heapq
sys.path.append(os.path.dirname(os.path.abspath(__file__)) +
"/../../Search_based_Planning/")
from Search_2D import plotting, env
from Search_2D.Astar import AStar
class BestFirst(AStar):
"""BestFirst set the heuristics as the priority
"""
def searching(self):
"""
Breadth-first Searching.
:return: path, visited order
"""
self.PARENT[self.s_start] = self.s_start
self.g[self.s_start] = 0
self.g[self.s_goal] = math.inf
heapq.heappush(self.OPEN,
(self.heuristic(self.s_start), self.s_start))
while self.OPEN:
_, s = heapq.heappop(self.OPEN)
self.CLOSED.append(s)
if s == self.s_goal:
break
for s_n in self.get_neighbor(s):
new_cost = self.g[s] + self.cost(s, s_n)
if s_n not in self.g:
self.g[s_n] = math.inf
if new_cost < self.g[s_n]: # conditions for updating Cost
self.g[s_n] = new_cost
self.PARENT[s_n] = s
# best first set the heuristics as the priority
heapq.heappush(self.OPEN, (self.heuristic(s_n), s_n))
return self.extract_path(self.PARENT), self.CLOSED
def main():
s_start = (5, 5)
s_goal = (45, 25)
BF = BestFirst(s_start, s_goal, 'euclidean')
plot = plotting.Plotting(s_start, s_goal)
path, visited = BF.searching()
plot.animation(path, visited, "Best-first Searching") # animation
if __name__ == '__main__':
main()
We use the best-first search algorithm here.
Best-first search movement of the robot (d2 - c7):
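Because the code above depends on the Search_2D package from the original repository, here is a small self-contained sketch of the same greedy best-first idea on a toy grid; the grid, start and goal are our own example values rather than the d2 - c7 board from the question:

import heapq

def best_first(grid, start, goal):
    """Greedy best-first search on a 4-connected grid of 0 (free) / 1 (wall),
    ordering the frontier purely by the Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, parent = [(h(start), start)], {start: start}
    while frontier:
        _, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            path = [goal]
            while path[-1] != start:
                path.append(parent[path[-1]])
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                heapq.heappush(frontier, (h((nr, nc)), (nr, nc)))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(best_first(grid, (0, 0), (3, 3)))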
10. a. Complex agent:
Simple agent:
Random agent:
b. The complex agent performs best, since it systematically tries out all of the spots.
c. A fourth agent could be a complex agent that acts with a random element: it is still rule-based, but it covers all of the spots in a random order.
simple-agent-robot.py
import matplotlib.pyplot as plt
import random

# 0 -> clean
# 1 -> wall
# 2 -> dirt
matrix = [
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1]
]

# Actions matrix -> represents the action for each position
# Actions = up (0), down (1), left (2), right (3), clean (4), end (5)
# 9 marks wall cells, where no action is defined
actionsMatrix = [
    [9, 9, 9, 9, 9, 9],
    [9, 1, 3, 1, 5, 9],
    [9, 1, 0, 1, 0, 9],
    [9, 1, 0, 1, 0, 9],
    [9, 3, 0, 3, 0, 9],
    [9, 9, 9, 9, 9, 9]
]

# The robot always starts at matrix[1][1]
currLine = 1
currCol = 1

def renderMatrix(matrix):
    # Draw the world and mark the robot's current position.
    plt.imshow(matrix, 'pink')
    plt.show(block=False)
    plt.plot(currCol, currLine, '*r', markersize=5)
    plt.pause(0.5)
    plt.clf()

def createWorld(m):
    # Randomly drop dirt (2) on the inner 4x4 cells.
    for mI in range(1, 5):
        for aI in range(1, 5):
            number = random.randint(0, 3)
            m[mI][aI] = 2 if number == 1 else 0
    renderMatrix(matrix)

def findNextAction(x, y):
    return actionsMatrix[x][y]

# decides which action will...