
I need 3,000 words on the system components, drawing on 30 journal articles;
10,000 words for the Stage 3 template, drawing on 30 journal articles;
and 10,000 words for the final review-based project, drawing on 60 journal articles.


Guidelines – Review-Based Projects

Proposed System (COMPONENTS of your system) (Section 3 in Sample 1):
1. Start your writing with the previous systems: what is the basis of their work (what were their developed systems based on)?
2. What criteria did you consider in collecting and analysing your literature? Why were some past results rejected?
3. What is the importance (GOAL) of doing your project?
4. Explain each factor (class/component) in your work. Classify these classes and find their relationships with the subclasses. What are the attributes of each class and subclass?
5. Justify why you consider each factor (class/component) in your classification.

Subsections in your Proposed System section:
· Each factor (class/component) in your classification should have a subsection in the Proposed System section.
· For each subsection, give an introduction to that factor (class).
· Justify in detail why you use this factor in your classification: what was the problem, what is the need for this factor (class), and so on.
· Also create further subsections for each factor that relate to the subclasses mentioned in your system classification.

Describe the "domain scenario" associated with your system components, e.g. describe the "surgical scenario" that is associated with both the data and view classes. Why do you need to describe the domain scenario? It allows you to describe the "dynamic systems" that change based on the end users' current tasks. For example, the surgical scenario allows us to:
· determine what type of visualisation data should be shown at a particular point in time of the surgery;
· determine where it should be viewed;
· determine how the data may be interacted with at that step of the surgery;
· describe the type of surgery and the number of surgical steps executed to perform the surgery;
· specify the action to be taken in each step, together with the accuracy and time associated with each step.
Define each factor (class) of your classification, as well as its subclasses, and justify why each factor is used for classification.

How to demonstrate the USEFULNESS of your proposed system components:
1. "Classify" the state-of-the-art publications that describe using the same technology with the technique you use in your system. Check the Classification Comparison (Previous Work Comparison) guidelines below.
2. "Verify" your proposed system (check the Validation and Evaluation section) based on:
· its goodness of fit, e.g. how well it describes mixed reality visualisation IGS systems within the literature;
· its completeness;
· its components, compared with those of image-guided surgery as an example.

Classification Comparison - Previous Work Comparison:
1. Create a table that shows the classification comparison for all the previous work you collected. This table should compare all previous solutions in terms of the factors you considered in your work. You also need to show the domain each solution works in and what its input to the system was (a minimal sketch of assembling such a table appears after this section).
2. Again start your writing with a subsection for each factor mentioned in your classification table, and write about what previous authors have done in relation to that factor. The last paragraph of each subsection should give the conclusion drawn from your classification table.
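The comparison table itself can be produced with any tool; the snippet below is only a minimal, illustrative sketch (Python with pandas) of how it could be assembled programmatically. The study names, domains, inputs and factor columns are hypothetical placeholders, not taken from any reviewed paper.

```python
# Sketch only: assembling a classification comparison table for reviewed work.
# All entries below are invented placeholders for illustration.
import pandas as pd

previous_work = [
    {"Study": "Author A et al. (2018)", "Domain": "Image guided surgery",
     "Input": "CT/MRI volumes",
     "Factor: Data": "yes", "Factor: View": "yes", "Factor: Interaction": "no"},
    {"Study": "Author B et al. (2019)", "Domain": "Mixed reality visualisation",
     "Input": "Tracked endoscope video",
     "Factor: Data": "yes", "Factor: View": "no", "Factor: Interaction": "yes"},
]

comparison_table = pd.DataFrame(previous_work)
print(comparison_table.to_string(index=False))
```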
Validation and Evaluation:
1. Here you validate your proposed system (show that the right system was built and that it meets the goal you set) and evaluate it (show the value of your system and its usefulness). You do this by relying on the comparison you made in the previous section.
2. Show why you evaluate and validate the system.
3. State the parameters you are going to use in validating and evaluating your system (a small example of computing typical parameters is sketched after this section).
4. Write about previous work and how they validated and evaluated their systems.

Subsections in your Validation and Evaluation section:
1. Start your writing with the tools that have been used to evaluate models such as the one you propose.
2. Then write about the tool that you are going to use to evaluate your model.
Validation and Evaluation: please check Section 6 in the given Sample 1.

Conclusion with Recommendations (two paragraphs):
· Reiterate the purpose of the research.
· Summarise the results/findings.
· Acknowledge the limitations of the research, focusing on the methodology, the model and the implementation.
· Suggest areas of research and future directions: what needs to be done as a result of your findings, focusing on the weaknesses identified.
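The previous work surveyed below typically validates and evaluates models using precision, recall, F1 and accuracy (see Table 4). As a minimal illustration only, the following Python/scikit-learn snippet shows how such parameters can be computed from a system's predictions; the labels are invented for demonstration.

```python
# Sketch only: computing common evaluation parameters (precision, recall, F1, accuracy).
# The gold labels and predictions below are invented for illustration.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = ["positive", "negative", "positive", "neutral", "positive"]
y_pred = ["positive", "negative", "neutral",  "neutral", "positive"]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"Precision: {precision:.2%}  Recall: {recall:.2%}  F1: {f1:.2%}")
print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2%}")
```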
For review-based projects ONLY: samples of factors considered in previous review-based projects.

Table 1: Publicly available datasets for ABSA

1. Customer review data (Hu et al., 2004)
Domain, language & size: digital products (EN), 3945 sentences
Format: text format with tags of aspect terms and polarities (-3, -2, -1, 1, 2, 3)
Example: speaker phone[+2], radio[+2], infrared[+2] ##my favourite features, although there are many, are the speaker phone, the radio and the infrared.
URL: https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html

2. SemEval 2014 (Pontiki et al., 2014)
Domain, language & size: restaurants (EN), 3841 sentences; laptops (EN), 3845 sentences
Format: XML tags, in which two attributes ("from" and "to") indicate the aspect term's start and end offsets in the text
Example: Lightweight and the screen is beautiful!
URL: http://alt.qcri.org/semeval2014/task4/

3. SemEval 2015 (Pontiki et al., 2015)
Domain, language & size: laptop (EN), 450 reviews (2500 sentences); restaurant (EN), 350 reviews (2000 sentences); hotel (EN), 30 reviews (266 sentences, no training data)
Format: XML tags of {E#A, polarity} for the laptop subset; {E#A, OTE, polarity} for the restaurant and hotel subsets
Example: Judging from previous posts this used to be a good place, but not any longer.
URL: http://alt.qcri.org/semeval2015/task12/

4. SemEval 2016 (Pontiki et al., 2016)
Domain, language & size: laptop (EN), 530 reviews (3308 sentences); mobile phone (CH), 200 reviews (9521 sentences); camera (CH), 200 reviews (8040 sentences); restaurant (DU), 400 reviews (2286 sentences); mobile phone (DU), 270 reviews (1697 sentences); restaurant (FR), 455 reviews (2429 sentences); restaurant (RU), 405 reviews (4699 sentences); restaurant (ES), 913 reviews (2951 sentences); restaurant (TU), 339 reviews (1248 sentences); hotel (AR), 2291 reviews (3309 sentences)
Format: XML tags of {E#A, polarity} for the laptop, mobile phone (CH) and camera subsets; {E#A, OTE, polarity} for the remaining subsets
Example: Decor is charming.
URL: http://alt.qcri.org/semeval2016/task5/

5. ICWSM 2010 JDPA Sentiment Corpus for the Automotive Domain (Kessler, Eckert, Clark, & Nicolov, 2010)
Domain, language & size: automotive & digital devices, 515 documents (19,322 sentences)
Format: XML tags (indicating the aspect term)
Example: Mention.Person
URL: https://verbs.colorado.edu/jdpacorpus/

6. Darmstadt Service Review Corpus (Toprak, Jakob, & Gurevych, 2010)
Domain, language & size: online university & online service reviews, 118 reviews (1151 sentences)
Format: MMAX format
URL: https://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/data/sentiment-analysis/DarmstadtServiceReviewCorpus.zip

7. FiQA ABSA (Maia et al., 2018)
Domain, language & size: financial news headlines, 529 samples; financial microblogs, 774 annotated posts
Format: JSON nodes with a sentiment score ranging from -1 to 1; "target" indicates the opinion target and "aspects" indicates aspect categories at different levels
Example: "1": { "sentence": "Royal Mail chairman Donald Brydon set to step down", "info": [ { "snippets": "['set to step down']", "target": "Royal Mail", "sentiment_score": "-0.374", "aspects": "['Corporate/Appointment']" } ] }
URL: https://sites.google.com/view/fiqa/home

8. Target-dependent Twitter sentiment classification dataset (Dong et al., 2014)
Domain, language & size: Twitter comments; the training data has 6,248 tweets and the testing data has 692 tweets
URL: http://goo.gl/5Enpu7
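Most of the datasets in Table 1 are distributed in simple text, XML or JSON formats. As an illustration, the sketch below (Python standard library only) reads a record in the FiQA ABSA JSON format from entry 7 of the table; it assumes the snippet shown in the table is a complete, well-formed record.

```python
# Sketch only: reading one record in the FiQA ABSA JSON format (Table 1, entry 7).
import json

raw = """
{
  "1": {
    "sentence": "Royal Mail chairman Donald Brydon set to step down",
    "info": [
      {
        "snippets": "['set to step down']",
        "target": "Royal Mail",
        "sentiment_score": "-0.374",
        "aspects": "['Corporate/Appointment']"
      }
    ]
  }
}
"""

for sample_id, sample in json.loads(raw).items():
    print(sample_id, sample["sentence"])
    for opinion in sample["info"]:
        print("  target:", opinion["target"],
              "| score:", float(opinion["sentiment_score"]),
              "| aspects:", opinion["aspects"])
```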
Table 4: Application of the CNN model in the consumer review domain

Opinion target extraction
1. Poria, Cambria, et al. (2016)
Domain, dataset & language: 12 electronic products, Hu and Liu (2004), English; laptop, SemEval '14, English; restaurant, SemEval '14, English
Model: deep CNN + Amazon WE + POS + LP
Performance: 12 electronic products: Precision 82.65-92.75%, Recall 85.02-88.32%, F1 84.87-90.44%; laptop: Precision 86.72%, Recall 78.35%, F1 82.32%; restaurant: Precision 88.27%, Recall 86.10%, F1 87.17%
2. Feng et al. (2018)
Domain, dataset & language: mobile phone, PM from Amazon, Jingdong and Lynx, Chinese
Model: deep CNN + WE + POS + dependency syntax (explicit aspects)
Performance: Precision 77.75%, Recall 72.61%, F1 75.09%

Aspect category extraction
3. Toh & Su (2016)
Domain, dataset & language: restaurant, SemEval '16, English; laptop, SemEval '16, English
Model: CNN + WE + head word + name list + word cluster
Performance: restaurant: F1 75.10%; laptop: F1 59.83%
4. Ruder et al. (2016)
Domain, dataset & language: mobile phone, SemEval '16, Dutch; hotel, SemEval '16, Arabic
Model: CNN + concatenated vectors
Performance: mobile phone: F1 45.55%; hotel: F1 52.11%
5. Gu et al. (2017)
Domain, dataset & language: smartphone, PM from Amazon, English; shirt, PM from Taobao, Chinese
Model: multiple CNNs, one for each aspect
Performance: smartphone: F1 72.67-83.74%; shirt: F1 92.26-97.34%
6. Wu et al. (2016)
Domain, dataset & language: smartphone, PM from Amazon, English
Model: multi-task CNN + word2vec/Wikipedia
Performance: F1 71.6-81.2%

Sentiment polarity
7. Gu et al. (2017)
Domain, dataset & language: smartphone, PM from Amazon, English; shirt, PM from Taobao, Chinese
Model: single CNN
Performance: smartphone: Acc 84.87% (binary); shirt: Acc 98.26% (binary)
8. Ruder et al. (2016)
Domain, dataset & language: hotel, SemEval '16, Arabic; mobile phone, SemEval '16, Dutch
Model: CNN + aspect tokens
Performance: hotel: Acc 82.72%; mobile phone: Acc 83.33%
9. Du et al. (2016)
Domain, dataset & language: electronics, PM from Amazon, English; movies and TV, English; CDs and vinyl, English; clothing, shoes and jewellery, English
Model: aspect-specific sentiment WE + CNN
Performance: electronics: Acc 92.08% (binary); movies and TV: Acc 92.05% (binary); CDs and vinyl: Acc 94.38% (binary); clothing, shoes and jewellery: Acc 93.22% (binary)
10. Wu et al. (2016)
Domain, dataset & language: smartphone, PM from Amazon, English
Model: multi-task CNN + word2vec/Wikipedia
Performance: Acc 84.1% (binary)
11. Xu et al. (2017)
Domain, dataset & language: laptop, PM from Yelp, English; restaurant, PM from Yelp, English
Model: CNN + CRF
Performance: laptop: Acc 70.90% (binary, lower than the SVM model); restaurant: Acc 68.34% (binary, lower than the SVM model)
12. Akhtar, Kumar, et al. (2016)
Domain, dataset & language: 12 personal electronic products, PM (Akhtar, Ekbal, & Bhattacharyya, 2016), Hindi
Model: CNN + SVM
Performance: Acc 65.96% (3-way)

Table 10: Summary of model comparison (CNN, RNN, RecNN)
Advantages listed across the three models:
· Ability to extract meaningful local patterns (n-grams)
· Non-linear dynamics
· Fast computation
· Distributed hidden state that can store past computations
· Ability to produce a fixed-size vector that takes into account the weighted combination of all words and summarises the sequence
· Does not require a large dataset
· Requires fewer parameters
· Simpler architecture
· Ability to learn tree-like structures
· Ability to construct representations for any new word
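Most of the models in Table 4 follow the same basic pattern: word embeddings feeding a convolutional layer whose pooled features drive a classifier. The sketch below builds a small sentence-level sentiment classifier of that kind in Keras; the vocabulary size, embedding dimension, filter settings and the random toy batch are assumptions for demonstration and do not reproduce any specific model from the table.

```python
# Sketch only: a generic "CNN + word embeddings" sentence-level sentiment classifier.
# Hyperparameters and the toy input batch are assumptions for illustration.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 5000    # assumed size of the word index
EMBED_DIM = 100      # assumed word-embedding dimension
NUM_CLASSES = 3      # positive / negative / neutral

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),                # token ids -> word vectors
    layers.Conv1D(128, kernel_size=3, activation="relu"),   # local n-gram features
    layers.GlobalMaxPooling1D(),                             # strongest feature per filter
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy forward pass with random token ids, just to show the expected output shape.
dummy_batch = np.random.randint(1, VOCAB_SIZE, size=(4, 50))
print(model.predict(dummy_batch).shape)   # (4, NUM_CLASSES)
```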