Overview: In this lab assignment, you will execute a data analysis workflow to manipulate and analyze real data from multiple sources. The tasks associated with the lab will require you to:



  • consolidate numerous daily conflict event files (~60 days) from a primary source

  • enrich the consolidated event data with additional information

  • report your results with appropriate visualization and analysis comments



The Scenario: You are currently serving as a member of the Operations Research cell supporting the J5 Director of the Joint Staff (3-Star Flag Officer). The primary task of the J5 is to propose strategies, plans and policy recommendations directly to the Chairman of the Joint Chiefs of Staff (CJCS). The current CJCS is advocating for more data analysis products from the J5 to help better inform strategic decisions in the execution of the National Military Strategy. There currently exists a stalemate among the J5 deputy directors over how to assess ongoing global conflict events. Specifically, the deputy directors continue to struggle over how to identify the current 'hottest' conflict spots in the world given (as always) ongoing, complex geo-political events. It is your job as the OR analyst to help overcome this stalemate and provide relevant analytical products to better inform these key decision makers.


Given the extremely positive feedback you received on your recent conflict event analysis work (Pandas Practicum), we return to ACLED as the primary source to complete this analysis task.


The Armed Conflict Location & Event Data (ACLED) is a disaggregated conflict analysis and crisis mapping project.


According to the ACLED project's website: https://www.acleddata.com/



ACLED is the highest quality, most widely used, real-time data and analysis source on political violence and protest around the world. Practitioners, researchers, and governments depend on ACLED for the latest reliable information on current conflict and disorder patterns.


For this assignment, you are provided a daily ACLED data file for the dates covering 18 DEC 2019 through 15 FEB 2020 (60 days == 60 files). These files can be found in the event_data folder within the lab4.zip file on the course Sakai site.



NOTE: Your office receives these files on a daily basis automatically from the source. The automated 'push' process occurs at midnight daily, which sometimes results in a duplication of event records across daily files.



Your Task: Write the necessary Python code to read, manipulate, analyze and visualize the conflict event data in order to answer the explicit questions posed in this lab assignment worksheet. A few important notes on this assignment:



  • This notebook contains all of the questions for this lab assignment

  • You will answer ALL questions and write/execute ALL applicable code within THIS notebook.


While there is a lot of information available in the original ACLED data files (31 total columns), we have an explicit analysis interest in only some of the columns. Those 11 columns are:




  • data_id


  • event_date


  • event_type


  • sub_event_type


  • region


  • country


  • location


  • latitude


  • longitude


  • fatalities


  • iso3


Task 1a: Read all daily event files into a single dataframe called event_df


Consider breaking this overall effort into several steps:



  1. Automatically obtain a list of all names of csv files in the directory

  2. For a single one of these files, figure out how to read it into a pandas dataframe with only the 11 relevant columns listed above

  3. Use a loop to perform step (2) for each of these files

  4. Concatenate all the dataframes into a single dataframe


Note: you might combine some of the steps above.
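
One possible sketch of this workflow is shown below. This is a sketch only: it assumes the daily csv files sit in an event_data folder next to this notebook, that every file contains all 11 columns of interest, and that data_id uniquely identifies an event; adjust the path and the duplicate handling to match your data.

import glob
import pandas as pd

# the 11 columns of analysis interest
cols = ['data_id', 'event_date', 'event_type', 'sub_event_type', 'region',
        'country', 'location', 'latitude', 'longitude', 'fatalities', 'iso3']

# step 1: list all csv files in the event_data directory (assumed path)
files = glob.glob('event_data/*.csv')

# steps 2-4: read each file with only the relevant columns, then concatenate
event_df = pd.concat([pd.read_csv(f, usecols=cols) for f in files],
                     ignore_index=True)

# the nightly 'push' can duplicate records across daily files, so drop duplicates
# (assumes data_id uniquely identifies an event)
event_df = event_df.drop_duplicates(subset='data_id').reset_index(drop=True)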

In[]:

# import required modules

In[]:


Task 1b: Report summary statistics of the newly created event_df dataframe


We can now examine all of the events in the consolidated event_df dataframe by reporting some summary statistics of interest. Please report the following:



  • total number of unique events

  • total number of unique event types

  • total number of events by event type


Total Number of Unique Events


Report the total number of unique events contained in event_df

In[]:

# Total number of unique events in event_df.
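
# One possible sketch, assuming data_id uniquely identifies an event:
event_df['data_id'].nunique()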

In[]:


Total Number of Unique Event Types


Report the total number of unique event types. That is, the number of unique event types observed in the event_type column.

In[]:

# Total number of unique event types in event_df.
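
# One possible sketch:
event_df['event_type'].nunique()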

In[]:


Total Number of Events by Event Type


Report the total number of observed events for each event type.

In[]:

# Total number of events for each event type
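
# One possible sketch:
event_df['event_type'].value_counts()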
In[]:


Task 2: Add Geographic Combatant Command (COCOM) Context to Event Data


While ACLED data includes great geographical information (i.e. region, country, location, latitude, longitude and iso3), the Joint Staff views the world exclusively from a combatant command (COCOM) perspective. There are 11 U.S. COCOMs (7 geographic, 4 functional), but we are only focused on the geographic COCOMs. Therefore, you will need to add a label to each conflict event that correctly identifies the geographic COCOM Area of Responsibility (AOR) in which it occurred. This should be accomplished via the creation of a new column within event_df called cocom. This task, however, is not entirely straightforward at this point, as we will have to incorporate additional data sources to be successful.



Map Image from: DOD Updater, CC BY-SA 4.0, https://en.wikipedia.org/w/index.php?curid=62620678


You have been provided a data source called cocom_countries.csv that provides a comprehensive list of countries that fall within each geographic combatant command. For the sake of this assignment, we will restrict the United States to just the Northern Command AOR (although it officially belongs to Indo-Pacific Command as well given Hawaii), and there is no requirement to include Space Command in our analysis at this time.


ISO 3166 Country Codes


In order to establish the necessary COCOM perspective in this lab, you ultimately will have to conduct join/merge operations between your primary event_df and the provided cocom_countries.csv. The ISO code will serve as your 'primary key' to combine these two data sources.


A few notes on the ISO 3166 standard from https://www.iso.org/iso-3166-country-codes.html



The purpose of ISO 3166 is to define internationally recognised codes of letters and/or numbers that we can use when we refer to countries and subdivisions. However, it does not define the names of countries – this information comes from United Nations sources (Terminology Bulletin Country Names and the Country and Region Codes for Statistical Use maintained by the United Nations Statistics Divisions). Using codes saves time and avoids errors as instead of using a country's name (which will change depending on the language being used) we can use a combination of letters and/or numbers that are understood all over the world.

Note, however, that the chosen ISO standard differs between the event_df and the cocom_countries.csv.


Given the difference in provided ISO codes, you must convert all iso2 codes to iso3 codes for consistency. To do this, you have been provided the file iso_3166.xml, which is a current version of all ISO codes provided by ISO.



In order to accomplish Task 2, one must break the problem down into explicit steps.


Task 2A: Read in the cocom_countries.csv file as a DataFrame called cocom_df

In[]:

# Ensure you navigate to the appropriate directory that contains cocom_countries.csv


# Your code here >>>>
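
# One possible sketch, assuming the file sits in the current working directory:
cocom_df = pd.read_csv('cocom_countries.csv')
cocom_df.head()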
In[]:


Task 2B: Read in the iso_3166.xml file and extract all country iso2 and iso3 codes


The following code is provided for you to execute Task 2B. The resulting output is a dataframe called iso_df which contains all extracted iso2 and iso3 codes.

In[]:

# Ensure you navigate to the appropriate directory that contains the XML file


######################################################################

### REFER BACK TO WEB_DATA LECTURE TO REFRESH FAMILIARITY WITH XML ###

### NOTE: YOU DO NOT NEED TO CHANGE ANYTHING IN THIS CELL ############


import xml.etree.ElementTree as ET

tree = ET.parse('iso_3166.xml')
root = tree.getroot()

# Create empty dataframe to store extracted iso codes
iso_df = pd.DataFrame(columns=['iso2', 'iso3'])

# List placeholders
iso2_list = []
iso3_list = []

# Extract iso2/3 for each country element
for country in root.findall('country'):
    iso2 = country.get('alpha-2')
    iso3 = country.get('alpha-3')

    # add iso values to respective lists
    iso2_list.append(iso2)
    iso3_list.append(iso3)

# Add values to iso_df as new columns (after simple len test)
if len(iso2_list) == len(iso3_list):
    iso_df['iso2'] = iso2_list
    iso_df['iso3'] = iso3_list

# View the last 5 rows of iso_df
iso_df.tail()

Task 2C: Add iso3 codes to all countries within the cocom_df


In this task, you must carefully consider how to merge iso_df with cocom_df.

In[]:

# Your code here to merge these sources >>>>>>
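
# One possible sketch: a left merge keeps every cocom_df row and attaches its iso3
# (assumes the iso-2 column in cocom_df is named 'iso_2'; check cocom_df.columns)
cocom_df = cocom_df.merge(iso_df, left_on='iso_2', right_on='iso2', how='left')
cocom_df.head()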
In[]:


Task 2D: We are finally ready to add COCOM info by merging on iso3 codes


NOTE: We already have a country column within the event_df, so to avoid confusion just drop the country column within the cocom_df prior to merging on iso3.

In[]:

# Drop country column from cocom_df >>>>>
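
# One possible sketch:
cocom_df = cocom_df.drop(columns=['country'])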

In[]:

# Conduct merge operation >>>>>
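
# One possible sketch: a left merge on iso3 keeps every event and attaches its COCOM
# (assumes the COCOM label column in cocom_df is named 'COCOM'; adjust to your file)
event_df = event_df.merge(cocom_df[['iso3', 'COCOM']], on='iso3', how='left')
event_df = event_df.rename(columns={'COCOM': 'cocom'})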

In[]:

# Verify merge operation >>>>>
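
# One possible sketch: every event should now carry a cocom label
print(event_df['cocom'].isna().sum())
event_df['cocom'].value_counts()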


Task 3: Analyze COCOM Patterns


The primary dispute within the J5 is the inability to decide upon an appropriate conflict metric to compare the four primary geographic COCOMs of interest: AFRICOM, CENTCOM, EUCOM, INDOPACOM. One group of J5 deputies believes that total events that occur within a COCOM over time is a good barometer to assess an AOR as a dangerous 'hotspot', while other deputies believe that approach is too generic. This competing perspective believes that total deaths that occur from conflict events within a COCOM over time is a better approach.


Your specific analysis tasks are as follows (Tasks 3A-3C)


Task 3A: Visualize the total number of conflict events over time within each COCOM of interest


Please provide a line plot visualization showing a direct comparison of daily event counts within the four primary geographic COCOMs of interest over the entire time frame of the event_df.


Please include the following with your plot:



  • dedicated plot line for each of the four COCOMs with individual color scheme

  • custom xlabel, ylabel and title

  • text interpreting the results of your visualization


CREATE PLOT 3A HERE

In[]:
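
# One possible sketch, assuming event_date parses to datetime and the cocom column
# uses labels like 'USAFRICOM', 'USCENTCOM', 'USEUCOM', 'USINDOPACOM' (assumed names)
import matplotlib.pyplot as plt

event_df['event_date'] = pd.to_datetime(event_df['event_date'])
cocoms = ['USAFRICOM', 'USCENTCOM', 'USEUCOM', 'USINDOPACOM']

daily_events = (event_df[event_df['cocom'].isin(cocoms)]
                .groupby(['event_date', 'cocom'])
                .size()
                .unstack('cocom'))

ax = daily_events.plot(figsize=(12, 6))   # one colored line per COCOM
ax.set_xlabel('Date')
ax.set_ylabel('Number of conflict events')
ax.set_title('Daily conflict events by COCOM (18 DEC 2019 - 15 FEB 2020)')
plt.show()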


INTERPRET PLOT 3A HERE (WITH SUPPORTING TEXT)







Task 3B: Visualize the total number of deaths that result from conflict events over time within each COCOM of interest


Please provide a line plot visualization showing a direct comparison of daily death counts within the four primary geographic COCOMs of interest over the entire time frame of the event_df.


Please include the following with your plot:



  • dedicated plot line for each of the four COCOMs with individual color scheme

  • custom xlabel, ylabel and title

  • text interpreting the results of your visualization


CREATE PLOT 3B HERE

In[]:
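
# One possible sketch, under the same assumptions as Plot 3A
daily_deaths = (event_df[event_df['cocom'].isin(cocoms)]
                .groupby(['event_date', 'cocom'])['fatalities']
                .sum()
                .unstack('cocom'))

ax = daily_deaths.plot(figsize=(12, 6))
ax.set_xlabel('Date')
ax.set_ylabel('Fatalities from conflict events')
ax.set_title('Daily conflict fatalities by COCOM (18 DEC 2019 - 15 FEB 2020)')
plt.show()

# the single event producing the most fatalities
event_df.loc[event_df['fatalities'].idxmax()]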


INTERPRET PLOT 3B HERE (WITH SUPPORTING TEXT)



  • Additionally, identify the specific event producing the most fatalities.







Task 3C: Visualize the total number of events by event_type within each COCOM of interest


Please create a stacked bar chart visualization that provides an individual 'stacked bar' capturing the total number of events by event_type for each of the four primary geographic COCOMs of interest over the entire time frame of event_df.


Please include the following with your plot:



  • dedicated stacked bar for each of the four COCOMs

  • custom xlabel, ylabel and title

  • text interpreting the results of your visualization


CREATE PLOT 3C HERE

In[]:
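
# One possible sketch, under the same assumptions as Plots 3A/3B
type_counts = (event_df[event_df['cocom'].isin(cocoms)]
               .groupby(['cocom', 'event_type'])
               .size()
               .unstack('event_type')
               .fillna(0))

ax = type_counts.plot(kind='bar', stacked=True, figsize=(10, 6))
ax.set_xlabel('COCOM')
ax.set_ylabel('Total number of events')
ax.set_title('Conflict events by event type within each COCOM')
plt.show()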


INTERPRET PLOT 3C HERE (WITH SUPPORTING TEXT)







Task 4: Create an interactive map displaying conflict events


The J5 is EXTREMELY interested in observable conflict events that took place around the time of the downing of a commercial airliner in Iran on 8 JAN 2020 (https://en.wikipedia.org/wiki/Ukraine_International_Airlines_Flight_752). As part of this interest, the J5 requested that you design, create, and present an interactive map visualization that plots all conflict events that occurred on January 7th and 8th within the CENTCOM AOR.


Please provide code below showing how to create an interactive map capturing CENTCOM conflict events on January 7th and 8th (2020) using the folium python module. Each conflict event should result in the creation of an individual folium Marker that contains the following information (derived from the associated dataframe column):




  • Event


  • Location


  • Country


NOTE: There are many added features that you could consider including in your resulting folium map. At a minimum, include an individual Marker with the information listed above, while also considering centering your map on the AOR. Feel free to explore additional features to enhance your final map presentation.
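
One possible sketch is shown below. It assumes event_date has already been parsed to datetime, that the CENTCOM label in the cocom column is 'USCENTCOM', and that folium is installed; the popup text draws on the event_type, location and country columns.

import folium

dates = pd.to_datetime(['2020-01-07', '2020-01-08'])
mask = (event_df['cocom'] == 'USCENTCOM') & (event_df['event_date'].isin(dates))
centcom_events = event_df[mask]

# centre the map roughly on the CENTCOM AOR (assumed coordinates)
aor_map = folium.Map(location=[30, 50], zoom_start=4)

# one Marker per conflict event with Event / Location / Country popup text
for _, row in centcom_events.iterrows():
    popup_text = (f"Event: {row['event_type']}<br>"
                  f"Location: {row['location']}<br>"
                  f"Country: {row['country']}")
    folium.Marker(location=[row['latitude'], row['longitude']],
                  popup=popup_text).add_to(aor_map)

aor_map   # display the interactive map in the notebook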
