CAP 6635 Artificial Intelligence Homework 1 [12 pts]
If you have multiple pictures, please include all pictures in one Word/PDF file.

1. [1 pt] What is the PEAS task environment description for an intelligent agent? For the following agents, develop a PEAS description of their task environment:
· Assembly-line part-picking robot
· Robot soccer player

2. [1 pt] Figure 1 shows the relationship between an AI agent and its environment. Please explain the names and functionalities of the parts marked as A, B, C, and D, respectively.
Figure 1 Figure 2

3. [1 pt] Please design pseudo-code for an energy-efficient model-based vacuum-cleaner agent as follows:
(1) The environment has three locations (A, B, C, as shown in Figure 2) and three states ("Clean", "Dirty", "Unknown").
(2) The vacuum cleaner has four actions: "Left", "Right", "Suck", and "Idle".
(3) The vacuum cleaner will clean the dirt as soon as it senses that the current environment is "Dirty".
(4) If the vacuum cleaner senses that the current environment is "Clean" or "Unknown", it will remain Idle for one time point.
(5) The vacuum cleaner will change location if it senses that the current environment is "Clean" for two consecutive time points in a row.
(6) The vacuum cleaner will change location if it senses that the current environment is "Unknown" for two consecutive time points in a row.
(7) When changing locations, the agent can only move "Right" if it is at location "A", move "Left" if it is at location "C", and randomly decide to move "Left" or "Right" if it is at location "B".
Summarize the percept sequences and corresponding actions as a table [0.5 pt]. Write the pseudo-code of the agent [0.5 pt].

4. [1 pt] Please summarize the task environment types for the following agents, in terms of "observable", "deterministic", "episodic", "static", "discrete", and "number of agents":

Agent                      observable    deterministic    episodic    static    discrete    # of agents
Tic-tac-toe
Tetris
Texas hold'em
Robot soccer competition

5. [1 pt] The goal of the 4-queens problem is to place 4 queens on a 4-row, 4-column chessboard such that no two queens attack each other. One solution to the 4-queens problem is shown in Figure 3. Please design a search-based solution to solve the 4-queens problem. Your solution must include the following four components:
a. Define the state [0.25 pt]
b. Define the successor function, and calculate the total number of states based on the defined successor function [0.25 pt]
c. Show the state graph [0.25 pt]
d. Show how to find solutions from the state graph [0.25 pt]
Figure 3
Figure 4: Eight-puzzle game: an initial state (left), and a portion of a search tree (right).

6. [1 pt] Figure 4 shows an initial state of the eight-puzzle game. Please create a search tree using this initial state. The search tree must have depth 2 (Figure 4 currently shows a search tree with depth 1) and show all search nodes (including the states corresponding to each search node).
Figure 5

7. [1 pt] Figure 5 shows a search tree with 21 nodes. Please use the Breadth-First Search (BFS) algorithm shown in Figure 6 to report the order in which the nodes are visited, and the fringe structure (report the fringe structure of each step as shown in the table below) [0.5 pt]. Please compare the fringe structure and explain the memory consumption of the method, with respect to the branching factor (b) and the search depth (d) [0.5 pt].

Fringe    Node visited    Fringe size    Search depth

Figure 6: Breadth-First Search (BFS)
Figure 7: Depth-First Search (DFS)

8. [1 pt] Figure 5 shows a search tree with 21 nodes. Please use the Depth-First Search algorithm shown in Figure 7 to report the order in which the nodes are visited, and the fringe structure (report the fringe structure of each step as shown in the table below) [0.5 pt]. Please compare the fringe structure and explain the memory consumption of the method, with respect to the branching factor (b) and the search depth (d) [0.5 pt].

DFS Method    Fringe    Node visited    Fringe size    Search depth

Figure 8: Robot navigation field.

9. [1 pt] Figure 8 shows a robot navigation field, where the red square (d2) is the robot and the green square (c7) is the goal. The shaded squares (such as b2, c2, etc.) are obstacles. The robot is not allowed to move diagonally. Nodes are coded using an alphabet letter followed by a digit (such as a0, b1, b2, etc.). When two sibling nodes are inserted into the fringe (queue), use a dequeue order that favors the node with the lower letter and the lower digit. For example, if d1 and e2 are sibling nodes, d1 will be dequeued first (because "d" has a lower alphabetic order than "e"). If a1 and a2 are sibling nodes, a1 will be dequeued first (because "1" is a lower digit than "2"). A node that has been expanded/visited does not need to be revisited.
· Use Depth-First Search to find a path from d2 to c7.
· Report the nodes in the fringe in the order they are included in the fringe. [0.25 pt]
· Report the order in which the nodes are expanded. [0.5 pt]
· Report the final path from d2 to c7. [0.25 pt]
· Use Breadth-First Search to find a path from d2 to c7.
· Report the nodes in the fringe in the order they are included in the fringe. [0.25 pt]
· Report the order in which the nodes are expanded. [0.5 pt]
· Report the final path from d2 to c7. [0.25 pt]
Fringe: Node visited/expanded

For all programming tasks, please submit the Notebook as an HTML or PDF file for grading (your submission must include the scripts/code and the results of the scripts). For each subtask, please use the task description (requirement) as comments, and report your code and results in the following format:

10. [1 pt] The "Source Code" module in Canvas lists three Python programs. "randomagentrobot.py" is a random moving agent, "simpleagentrobot.py" is a reflex agent with a model to ensure the agent walks through all locations, and "complexagentrobot.py" is a goal-based agent aiming to clean all dirty spots with a minimum number of moves.
a. Use a proper Python IDE environment (the FAU HPC environment allows students to use Jupyter Notebook for Python coding) to run each of the three Python programs. Change the environment to a 6x6 navigation board (0.25 pt). Capture one screenshot (or plot) showing that the program is running properly (0.25 pt).
b. Explain which agent performs the best, and why (0.25 pt).
c. Propose a solution to design a fourth agent program that may outperform "complexagentrobot.py". Explain why your proposed solution can perform better (0.25 pt). (No need to implement the agent; just describe your idea using textual descriptions or pseudo-code.)

11. [2 pts] The "Blind Search to Play Maze [Notebook, html]" posted in the Canvas "Source Code" module shows a Maze game using DFS and BFS search (via the "DFS" or "BFS" parameters). Use the Notebook as the skeleton code, then validate and compare the following settings and results.
a. Using Figure 9 as the game field, use any two states as the initial and goal states, and report a screenshot of the program. Explain the meaning of the output [0.25 pt].
b. Set the initial state to [0, 0] and the goal state to [0, 1]. Report the number of nodes expanded by BFS and DFS, respectively. Report the final path discovered by each algorithm [0.25 pt]. Explain why or why not each method is optimal [0.25 pt], and why one approach expands far more nodes than the other [0.25 pt].
c. Set the initial state to [0, 0] and the goal state to [0, 1], [0, 2], [0, 3], [0, 5], [0, 6], [0, 7], [0, 8], and [0, 9], respectively. Create one plot showing the maximum fringe size of BFS (x-axis: goal state; y-axis: maximum fringe size) [0.25 pt]. Create one plot showing the maximum fringe size of DFS (x-axis: goal state; y-axis: maximum fringe size) [0.25 pt]. Explain how the fringe size grows with respect to the search depth for DFS and BFS, respectively [0.25 pt].
d. For goal state [0, 9], compare the paths returned by BFS and DFS. Which method is optimal, and which is not? Why? [0.25 pt]
Figure 9: Maze game field
Answered 4 days after Feb 02, 2022


Chirag answered on Feb 04 2022
1. Agent: an agent takes input from the environment through its sensors, performs operations, and produces output through its actuators. The task an agent solves is specified by its Performance measure, Environment, Actuators, and Sensors (PEAS).
The PEAS task environment description for the assembly-line part-picking robot is as follows:
1. Performance measure: percentage of parts placed in the correct bins.
2. Environment: a conveyor belt with parts, and bins.
3. Actuators: jointed arm and hand.
4. Sensors: camera and joint-angle sensors.
The PEAS task environment description for a robot that plays (and tries to win) soccer is as follows:
1. Performance measure: counts of attempted goals, goals saved, and wrongly kicked balls.
2. Environment: the soccer field (arc, defense area, attack area, middle area, penalty area), the referee robot, weather conditions, the audience, the ball, teammate robots, and opponent robots.
3. Actuators: the robot's legs, arms, and head.
4. Sensors: camera, communication devices, and other sensors.
2. A. Percepts – the input that the agent perceives at a given moment. Percepts are detected by the sensors, and the agent's action at any instant may depend on the entire percept sequence.
B. Actions – the operations carried out by the agent based on its percepts.
C. Actuators – the components that carry out the agent's movements or motions.
D. Sensors – the components that detect changes in the environment.
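The percept-action loop described above can be illustrated with a minimal sketch (the function and table names below are mine, not from the assignment): sensors deliver percepts, the agent program maps the percept history to an action, and actuators carry the action out.

```python
# Minimal sketch of the agent/environment loop: sensors produce percepts,
# the agent program maps the percept history to an action, and actuators
# execute the chosen action. All names here are illustrative.
def table_driven_agent(percepts, table):
    """Look up the full percept sequence in an action table."""
    return table.get(tuple(percepts), "NoOp")

action_table = {("Dirty",): "Suck", ("Clean",): "Idle"}
percepts = ["Dirty"]                      # latest sensor reading appended here
print(table_driven_agent(percepts, action_table))  # -> Suck
```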
3. Percept sequence table (derived from rules (1)-(7) of the question):

Percept sequence                                  Action
[Location, Dirty]                                 Suck
[Location, Clean] (first time)                    Idle
[Location, Unknown] (first time)                  Idle
[Location, Clean], [Location, Clean]              Change location (Right at A, Left at C, random at B)
[Location, Unknown], [Location, Unknown]          Change location (Right at A, Left at C, random at B)
Pseudo-code:

function VACUUM-CLEANER(location, status, idle_count) returns an action
    if status == Dirty then
        return Suck
    else if idle_count == 0 then        // first Clean/Unknown percept: rule (4)
        return Idle
    else                                // second consecutive Clean/Unknown: rules (5), (6)
        return SwitchLocation

idle_count = 0
loop
    action = VACUUM-CLEANER(v.location, v.status, idle_count)
    if action == Idle then
        idle_count = idle_count + 1
    else
        idle_count = 0
    if action == SwitchLocation then    // rule (7)
        if v.location == A then move Right
        else if v.location == C then move Left
        else move Left or Right at random
    else
        perform the action
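As a sanity check, the rules above can be turned into a small runnable sketch. The class and method names are mine; the behavior follows the pseudo-code, treating "Clean" and "Unknown" alike when counting consecutive idle percepts.

```python
import random

class VacuumAgent:
    """Sketch of the model-based vacuum agent for locations A, B, C."""
    def __init__(self):
        self.idle_count = 0   # consecutive "Clean"/"Unknown" percepts seen

    def step(self, location, status):
        if status == "Dirty":             # rule (3): suck as soon as dirty
            self.idle_count = 0
            return "Suck"
        if self.idle_count == 0:          # rule (4): idle for one time point
            self.idle_count = 1
            return "Idle"
        self.idle_count = 0               # rules (5)-(7): change location
        if location == "A":
            return "Right"
        if location == "C":
            return "Left"
        return random.choice(["Left", "Right"])   # location B

agent = VacuumAgent()
print(agent.step("A", "Dirty"))   # -> Suck
print(agent.step("A", "Clean"))   # -> Idle
print(agent.step("A", "Clean"))   # -> Right
```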
4. Following are the answers:

Agent                      Observable    Deterministic    Episodic      Static     Discrete      # of agents
Tic-tac-toe                fully         deterministic    sequential    static     discrete      2 (multi)
Tetris                     fully         stochastic       sequential    dynamic    discrete      1 (single)
Texas hold'em              partially     stochastic       sequential    static     discrete      multi
Robot soccer competition   partially     stochastic       sequential    dynamic    continuous    multi
5. The problem is to place 4 queens on the chessboard so that no two queens are in the same row, column, or diagonal.
Formulating the problem involves representing the states, selecting the initial state, and defining the operators/successor states. Several formulations are possible:
Problem Formulation 1
State: any arrangement of 0 to 4 queens on the board
Initial state: 0 queens on the board
Successor function: add a queen to any empty square of the board
Goal test: 4 queens on the board, no queen attacked
The initial state has 16 successors (one per square). At the next level, each state has 15 successors, and so on. The search tree can be restricted by considering only successors in which no queen attacks another: each new queen is checked against the queens already placed, and the depth of the tree is 4.
Formulation 2
State: an arrangement of 4 queens on the board
Initial state: all 4 queens placed in column 1
Successor function: change the position of any one queen
Goal test: 4 queens on the board, no queen attacked
Formulation 3
State: an arrangement of k queens in the first k rows, with no queen attacking another
Initial state: 0 queens
Successor function: add a queen to the (k+1)-th row so that no queen attacks another
Goal test: 4 non-attacking queens on the board
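Formulation 3 is small enough to run directly. The sketch below (function names are mine) searches the incremental state graph depth-first and finds both solutions of the 4-queens problem.

```python
def successors(state, n=4):
    """Formulation 3: state[i] is the column of the queen in row i;
    extend a safe k-queen placement by one queen in row k."""
    row = len(state)
    for col in range(n):
        # safe iff no shared column and no shared diagonal with earlier rows
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(state)):
            yield state + (col,)

def solve(n=4):
    """Depth-first search over the incremental state graph."""
    solutions, stack = [], [()]
    while stack:
        state = stack.pop()
        if len(state) == n:
            solutions.append(state)
        else:
            stack.extend(successors(state, n))
    return solutions

print(sorted(solve()))   # the two 4-queens solutions
```

Each tuple lists the column of the queen in rows 0..3, so a depth-4 branch of the state graph corresponds to one complete placement.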
6. Search tree is given as:
7. Search tree with 21 nodes
8. BFS and DFS memory consumption:
BFS uses more memory than DFS. BFS stores O(b^d) nodes, whereas DFS stores O(b·m) nodes, where b is the branching factor, d is the depth of the shallowest goal, m is the maximum depth, and the O notation indicates the space used in memory.
Here the branching factor is 4 and the search depth is 3, so BFS uses more memory: at each level of a DFS execution there are no more than 3 unexpanded sibling nodes in the fringe, whereas in BFS all of the nodes from F to U are in memory at the same time.
Table as shown above:

Fringe     Node visited    Fringe size    Search depth
–          A               0              1
C, D, E    B               3              2
G, H, E    F               3              3
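The memory argument above can be checked with a short experiment. Assuming (as the 21-node tree of Figure 5 suggests) a complete tree with branching factor b = 4 and depth d = 2, the sketch below measures the peak fringe size of BFS and DFS; the function name and node encoding are mine.

```python
from collections import deque

def max_fringe(order, b=4, d=2):
    """Peak fringe size for BFS or DFS on a complete tree of branching
    factor b and depth d; nodes are encoded by their path from the root."""
    fringe = deque([(0,)])                 # root node
    peak = 0
    while fringe:
        peak = max(peak, len(fringe))
        node = fringe.popleft() if order == "BFS" else fringe.pop()
        if len(node) <= d:                 # expand nodes above depth d
            for child in range(b):
                fringe.append(node + (child,))
    return peak

print(max_fringe("BFS"))   # grows on the order of b**d
print(max_fringe("DFS"))   # grows on the order of b*d
```

On this tree BFS peaks when the entire bottom level is in the fringe, while DFS keeps only a handful of unexpanded siblings per level.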
9. The diagram for the implementation is shown below, and the code for the implementation is as follows:
    
"""
Best-First Searching

@author: huiming zhou
"""

import os
import sys
import math
import heapq

sys.path.append(os.path.dirname(os.path.abspath(__file__)) +
                "/../../Search_based_Planning/")

from Search_2D import plotting, env
from Search_2D.Astar import AStar


class BestFirst(AStar):
    """BestFirst sets the heuristic as the priority."""

    def searching(self):
        """
        Best-first searching.
        :return: path, visited order
        """
        self.PARENT[self.s_start] = self.s_start
        self.g[self.s_start] = 0
        self.g[self.s_goal] = math.inf
        heapq.heappush(self.OPEN,
                       (self.heuristic(self.s_start), self.s_start))

        while self.OPEN:
            _, s = heapq.heappop(self.OPEN)
            self.CLOSED.append(s)

            if s == self.s_goal:
                break

            for s_n in self.get_neighbor(s):
                new_cost = self.g[s] + self.cost(s, s_n)

                if s_n not in self.g:
                    self.g[s_n] = math.inf

                if new_cost < self.g[s_n]:  # condition for updating the cost
                    self.g[s_n] = new_cost
                    self.PARENT[s_n] = s
                    # best-first sets the heuristic as the priority
                    heapq.heappush(self.OPEN, (self.heuristic(s_n), s_n))

        return self.extract_path(self.PARENT), self.CLOSED


def main():
    s_start = (5, 5)
    s_goal = (45, 25)

    BF = BestFirst(s_start, s_goal, 'euclidean')
    plot = plotting.Plotting(s_start, s_goal)

    path, visited = BF.searching()
    plot.animation(path, visited, "Best-first Searching")  # animation


if __name__ == '__main__':
    main()
We will be using the Best-First Search algorithm here.
Best-First Search movement of the robot (d2 to c7):
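For the BFS part of question 9, the search itself is self-contained enough to sketch directly. The code below is an assumption-laden illustration: it uses a 10x10 grid a0..j9 and includes only the obstacles explicitly named in the question (b2, c2), since the full layout of Figure 8 is not reproduced here, so the printed path is illustrative rather than the official answer.

```python
from collections import deque

ROWS = "abcdefghij"          # assumed 10x10 field, nodes a0..j9
OBSTACLES = {"b2", "c2"}     # only the obstacles named in the question

def neighbors(node):
    """4-connected neighbors, sorted lower-letter-then-lower-digit first."""
    r, c = ROWS.index(node[0]), int(node[1])
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]  # no diagonals
    cells = [ROWS[nr] + str(nc) for nr, nc in cand
             if 0 <= nr < 10 and 0 <= nc < 10]
    return sorted(cell for cell in cells if cell not in OBSTACLES)

def bfs(start, goal):
    """Breadth-first search; visited nodes are never re-inserted."""
    fringe, parent = deque([start]), {start: None}
    while fringe:
        node = fringe.popleft()
        if node == goal:                  # reconstruct path via parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in parent:
                parent[nxt] = node
                fringe.append(nxt)
    return None

print(bfs("d2", "c7"))   # a shortest 6-step path from d2 to c7
```

Because BFS explores in layers, the returned path is optimal in the number of moves; swapping `popleft()` for `pop()` would turn the same loop into DFS, which need not return a shortest path.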
10. a. Complex agent:
Simple agent:
Random agent:
b. The complex agent performs the best, since it is goal-based and cleans all the dirty spots with the minimum number of moves.
c. A fourth agent could extend the complex agent with randomized action selection for states it has not yet observed: it remains rule-based, but the randomization helps it cover all the spots.
simpleagentrobot.py:

import matplotlib.pyplot as plt
import random

# 0 -> clean
# 1 -> wall
# 2 -> dirt
matrix = [
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1]
]

# Actions matrix -> represents the action for each position
# Actions = up (0), down (1), left (2), right (3), clean (4), end (5)
actionsMatrix = [
    [9, 9, 9, 9, 9, 9],
    [9, 1, 3, 1, 5, 9],
    [9, 1, 0, 1, 0, 9],
    [9, 1, 0, 1, 0, 9],
    [9, 3, 0, 3, 0, 9],
    [9, 9, 9, 9, 9, 9]
]

# The robot always starts at matrix[1][1]
currLine = 1
currCol = 1

def renderMatrix(matrix):
    plt.imshow(matrix, 'pink')
    plt.show(block=False)
    plt.plot(currCol, currLine, '*r', 'LineWidth', 5)
    plt.pause(0.5)
    plt.clf()

def createWorld(m):
    for mI in range(1, 5):
        for aI in range(1, 5):
            number = random.randint(0, 3)
            m[mI][aI] = 2 if number == 1 else 0
    renderMatrix(matrix)

def findNextAction(x, y):
    return actionsMatrix[x][y]

# decides which action will...