


DS 5010 Homework 4

Instructions

• Submit your solutions on Canvas by the deadline displayed online.
• Your submission must include a single Python module (file with extension “.py”) that includes all of the code necessary to answer the problems. All of your code should run without error.
• Problem numbers must be clearly marked with code comments. Problems must appear in order, but later problems may use functions defined in earlier problems.
• Functions must be documented with a docstring describing at least (1) the function’s purpose, (2) any and all parameters, and (3) the return value. Code should be commented such that its function is clear.
• You may use functions from built-in modules (e.g., math). You may NOT use external modules (e.g., numpy, pandas, etc.).

In this assignment, you will implement a simple supervised learning classifier called the perceptron. The perceptron is the basis for neural networks, where it serves as an artificial neuron. You may use the code from “hw4-skeleton.py” on Piazza as a starting point. The Perceptron class is initialized with a single attribute weights, which is a list of weights that will be updated through training examples of input/output pairs. The __init__() method is already defined, and (optionally) allows the user to provide a list of initial weights. You can also find an example dataset called “sonar.csv” on Piazza for testing your classifier on real data.

Problem 1

A perceptron predicts output by calculating the weighted sum of the input. If we have an input sample x = [1, x1, x2, ..., xm] and weights w = [w0, w1, ..., wm], then we calculate the activation as:

    activation = sum_{i=0}^{m} w_i * x_i    (1)

Then we predict the output using the following rule:

    f(x) = 1 if activation > 0, f(x) = 0 otherwise    (2)

The term “activation” refers to the metaphor of the perceptron as an artificial neuron. We classify the sample as positive (1) if the activation is over a certain threshold, and negative (0) otherwise.
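As a quick illustration, the activation in equation (1) and the threshold rule in equation (2) can be computed directly. This is a minimal sketch; the helper names `activation` and `threshold` are illustrative, not part of the assignment.

```python
def activation(weights, x):
    """Weighted sum from equation (1), with x0 = 1 prepended for the bias."""
    sample = [1] + list(x)  # prepend x0 = 1 so w0 acts as the bias term
    return sum(w * xi for w, xi in zip(weights, sample))

def threshold(act):
    """Step rule from equation (2): predict 1 if activation > 0, else 0."""
    return 1 if act > 0 else 0

print(activation([-5, 1, 1], (-1, -1)))             # -7
print(threshold(activation([-5, 1, 1], (-1, -1))))  # 0
```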
Note that we always prepend the input vector x = [1, x1, x2, ..., xm] with x0 = 1; the corresponding weight w0 is called the bias, and serves a role similar to the intercept in the equation for a line.

As an example, consider an input x = [-1, -1] and a perceptron with weights w = [-5, 1, 1]. We would calculate the activation as:

    activation = (-5)(1) + (1)(-1) + (1)(-1) = -7    (3)

Since activation < 0, we would predict 0.

Define the method Perceptron.predict1(self, x) satisfying the following criteria:
• takes parameter x, which is a single input sample provided as a list or tuple
• returns the predicted output (0 or 1) from the single input sample using the current weights
• prints a message that the “model has not been trained yet” if the weights are None

Examples:
In : x = [(-1, -1), (-5, -2.5), (-7.5, -7.5), (10, 7.5), (-2.5, 12.5), (5, 10), (5, 5)]
In : y = [0, 0, 0, 1, 0, 1, 1]
In : model = Perceptron(weights=[-5, 1, 1])
In : model.predict1(x[0]) # correct
Out: 0
In : model.predict1(x[1]) # correct
Out: 0
In : model.predict1(x[2]) # correct
Out: 0
In : model.predict1(x[3]) # correct
Out: 1
In : model.predict1(x[4]) # incorrect!
Out: 1

Problem 2

Define the method Perceptron.predict(self, x) satisfying the following criteria:
• takes parameter x, which is an iterable of multiple input samples (each being a list or tuple)
• returns a list of all predicted outputs (0 or 1) from the input samples using the current weights

Hint: use your method from Problem 1.

Examples:
In : x = [(-1, -1), (-5, -2.5), (-7.5, -7.5), (10, 7.5), (-2.5, 12.5), (5, 10), (5, 5)]
In : y = [0, 0, 0, 1, 0, 1, 1]
In : model = Perceptron(weights=[-5, 1, 1])
In : model.predict(x) # all correct but x[4]
Out: [0, 0, 0, 1, 1, 1, 1]

Problem 3

The perceptron follows a simple “online” update rule that re-calculates the weights from each input/output example it receives during training.
If we have an input sample x = [1, x1, x2, ..., xm] with known output y, then we calculate the predicted output ypred. If the prediction is correct (ypred = y), then we do nothing, and the weights are not updated. If the prediction is incorrect (ypred ≠ y), then we update the weights w = [w0, w1, ..., wm] using the rule:

    wi,new = wi,old + (y - ypred) * xi    (4)

for all features i = 0, ..., m. This update shifts the weights (and therefore the decision boundary) toward correctly classifying x. To fully train the model, we simply apply this update rule to all input/output samples. Usually, we will loop through the full dataset multiple times to iteratively improve the model further after each round of updates.

Define the method Perceptron.update(self, x, y) satisfying the following criteria:
• takes parameter x, which is a single input sample provided as a list or tuple
• takes parameter y, which is a single output sample (0 or 1)
• initializes the weights to a list of 0’s if they are still None
• performs a single training update using the sample input/output pair

Examples:
In : x = [(-1, -1), (-5, -2.5), (-7.5, -7.5), (10, 7.5), (-2.5, 12.5), (5, 10), (5, 5)]
In : y = [0, 0, 0, 1, 0, 1, 1]
In : model = Perceptron(weights=[-5, 1, 1])
In : model.update(x[4], y[4])
In : print(model.weights) # updated weights
[-6, 3.5, -11.5]
In : model.predict1(x[4]) # x[4] predicted correctly now!
Out: 0
In : model.predict(x) # others are wrong now though
Out: [1, 1, 1, 0, 0, 0, 0]

Problem 4

Define the method Perceptron.fit(self, x, y, num_iter=100) satisfying the following criteria:
• takes parameter x, which is an iterable of input samples (each being a list or tuple)
• takes parameter y, which is an iterable of output samples (each 0 or 1)
• (a) loops through all sample input/output pairs, performing a training update for each pair
• (b) repeats the training loop (a) for as many iterations as specified by num_iter

Hint: use your method from Problem 3.
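One way Problems 1 through 4 could fit together is sketched below. The method bodies (and the choice to zero-initialize len(x) + 1 weights) are assumptions consistent with the worked examples, not the official skeleton from Piazza.

```python
class Perceptron:
    def __init__(self, weights=None):
        self.weights = weights  # list [w0, w1, ..., wm], or None if untrained

    def predict1(self, x):
        """Problem 1: predict 0 or 1 for a single sample x."""
        if self.weights is None:
            print("model has not been trained yet")
            return None
        sample = [1] + list(x)  # prepend x0 = 1 for the bias weight w0
        act = sum(w * xi for w, xi in zip(self.weights, sample))
        return 1 if act > 0 else 0  # threshold rule from equation (2)

    def predict(self, x):
        """Problem 2: predict outputs for an iterable of samples."""
        return [self.predict1(xi) for xi in x]

    def update(self, x, y):
        """Problem 3: one online update, w_i += (y - y_pred) * x_i (eq. 4)."""
        if self.weights is None:
            self.weights = [0] * (len(x) + 1)  # +1 for the bias weight
        y_pred = self.predict1(x)
        if y_pred != y:  # only update on a misclassification
            sample = [1] + list(x)
            self.weights = [w + (y - y_pred) * xi
                            for w, xi in zip(self.weights, sample)]

    def fit(self, x, y, num_iter=100):
        """Problem 4: loop the update over all pairs, num_iter times."""
        for _ in range(num_iter):
            for xi, yi in zip(x, y):
                self.update(xi, yi)

model = Perceptron(weights=[-5, 1, 1])
model.update((-2.5, 12.5), 0)
print(model.weights)  # [-6, 3.5, -11.5], matching the Problem 3 example
```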
Examples:
In : x = [(-1, -1), (-5, -2.5), (-7.5, -7.5), (10, 7.5), (-2.5, 12.5), (5, 10), (5, 5)]
In : y = [0, 0, 0, 1, 0, 1, 1]
In : model = Perceptron()
In : model.fit(x, y, num_iter=100)
In : print(model.weights) # fitted weights
[0, 12.5, -5.0]
In : model.predict(x) # perfectly separable
Out: [0, 0, 0, 1, 0, 1, 1]

Problem 5

Define the method Perceptron.score(self, x, y) satisfying the following criteria:
• takes parameter x, which is an iterable of input samples (each being a list or tuple)
• takes parameter y, which is an iterable of output samples (each 0 or 1)
• returns the accuracy of predicting y from x using the current weights
• accuracy is calculated as the proportion of correct predictions

Examples:
In : x = [(-1, -1), (-5, -2.5), (-7.5, -7.5), (10, 7.5), (-2.5, 12.5), (5, 10), (5, 5)]
In : y = [0, 0, 0, 1, 0, 1, 1]
In : model = Perceptron(weights=[-5, 1, 1])
In : model.score(x, y)
Out: 0.8571428571428571
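The accuracy in Problem 5 is just the fraction of predictions that match the true labels. A standalone sketch of that calculation (the helper name `accuracy` is illustrative; in the assignment this logic belongs inside Perceptron.score):

```python
def accuracy(predictions, y):
    """Proportion of predictions equal to the true labels."""
    correct = sum(1 for p, t in zip(predictions, y) if p == t)
    return correct / len(y)

# With weights [-5, 1, 1] the model predicts [0, 0, 0, 1, 1, 1, 1]
# against y = [0, 0, 0, 1, 0, 1, 1]: 6 of 7 correct.
print(accuracy([0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 0, 1, 1]))  # 0.8571428571428571
```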