The 1st file is the template for the project, the 2nd one is the reference for how to show outputs and testing, and the 3rd is the file of the 1st report; if you have any doubts, you can get what you need from it.
MN692 Capstone Project Report

MN692 Capstone Project
Group Project Title
Final Report
Student Names
Student IDs
School of IT and Engineering
Trimester x 201x

Acknowledgement
Signature of Students: Sign your signature here
Date of Submission of Report: Put a date here clearly

Table of Contents
Acknowledgement
Abstract
Glossary and Abbreviations
1. Introduction
2. Project Detailed Design
2.1 Summary of Literature Review
2.2 Objectives of the Project
2.3 Detailed Design
3 Project Implementation and Evaluation
3.1 Implementations
3.2 Testing
3.3 Results of the Project
3.4 Discussions/Analysis
4 Conclusions
References
5 Appendices
Appendix I: Simulation Source Codes
Appendix II: Detailed proof of theory
Appendix III: Very Long Tables of data

Abstract
The abstract should not be more than 250 words. Describe your project, focusing on research questions and research methodology for the next stage of the project.

Glossary and Abbreviations

1. Introduction
The introduction should describe what the project is about and what the reader should know about your project before reading the rest of the report. It should also tell the reader what to expect in the report and what each section will contain. For example, you may say something like: "In Section 1 a preliminary review of the topic is undertaken. Section 2 contains the literature review and the state of the art as it is today. Section 3 is the design, while Section 4 provides simulations and discussions. Section 5 is the summary. Finally, references are given in Section 6." These statements help readers to zero in on the sections of your report that might be of interest to them.

2. Project Detailed Design
Write your literature review summary in this section, with in-text references. If you have undertaken any prior literature review in MN691 on the same topic as in MN692, you may use that review here as well.

2.1 Summary of Literature Review
Since the aim of your literature review is to understand what other authors have done on this topic, and to obtain new ideas that you could use in your project, you need to summarise those new ideas. This section should contain highlights from the literature review, particularly points and ideas that you wish to use in your project.

2.2 Objectives of the Project
State the objectives of the project as agreed with your lecturer and team (if a team is involved). You may add insights gained from Section 2.1 here.

2.3 Detailed Design
Weekly schedule, Gantt chart, and design methodology in detail.

3 Project Implementation and Evaluation
If your project requires simulations, describe how the simulations were done, which software was used, what the inputs to the simulations are, and the results you got from them. This section may contain tables, graphs, etc.

3.1 Implementations
Describe your implementation.

3.2 Testing
Every system that is implemented needs to be tested. It does not matter whether it is a software project, a hardware project, or a whole system combining both hardware and software. In all cases, test the system for performance and record your findings on how the system you just implemented has performed. Honesty is required at this stage: it is ethically required that you report honestly whether the system fails or performs optimally. It is not to be seen as a flaw or failure if your system does not perform as expected.
Thomas Edison ran over 1,000 failed experiments on his first light bulb before he got it right.

3.3 Results of the Project
In this section, show your major results.

3.4 Discussions/Analysis
Discuss your results, explaining them to those who will read your report. Be aware that you are the expert on this project, and it is your responsibility to explain your results in detail.

4 Conclusions
Draw your conclusions here.

References
Compile your reference list as used in the review and research sections. This section should not contain any reference to an article that you have not used. Use the IEEE Communications referencing format; check the Library for guidance on this format. For example:
[1] B. Klaus and P. Horn, Robot Vision. Cambridge, MA: MIT Press, 1986.
[2] L. Stein, "Random patterns," in Computers and You, J. S. Brake, Ed. New York: Wiley, 1994, pp. 55-70.
[3] R. E. Kalman, "New results in linear filtering and prediction theory," J. Basic Eng., ser. D, vol. 83, pp. 95-108, Mar. 1961.
[4] L. Liu and H. Miao, "A specification based approach to testing polymorphic attributes," in Formal Methods and Software Engineering: Proceedings of the 6th International Conference on Formal Engineering Methods, ICFEM 2004, Seattle, WA, USA, November 8-12, 2004.

5 Appendices
If you have appendices, include them here, for example:
Appendix I: Simulation Source Codes
Appendix II: Detailed proof of theory
Appendix III: Very Long Tables of data

Title of your project

Taking a Long Look at QUIC
An Approach for Rigorous Evaluation of Rapidly Evolving Transport Protocols
[email protected] Samuel Jero Purdue University
[email protected] David Choffnes Northeastern University
[email protected] Cristina Nita-Rotaru Northeastern University
[email protected] Alan Mislove Northeastern University
[email protected] ABSTRACT Google’s QUIC protocol, which implements TCP-like properties at the application layer atop a UDP transport, is now used by the vast majority of Chrome clients accessing Google properties but has no formal state machine specification, limited analysis, and ad-hoc evaluations based on snapshots of the protocol implementation in a small number of environments. Further frustrating attempts to evaluate QUIC is the fact that the protocol is under rapid develop- ment, with extensive rewriting of the protocol occurring over the scale of months, making individual studies of the protocol obsolete before publication. Given this unique scenario, there is a need for alternative tech- niques for understanding and evaluating QUIC when compared with previous transport-layer protocols. First, we develop an ap- proach that allows us to conduct analysis across multiple versions of QUIC to understand how code changes impact protocol effec- tiveness. Next, we instrument the source code to infer QUIC’s state machine from execution traces. With this model, we run QUIC in a large number of environments that include desktop and mobile, wired and wireless environments and use the state machine to understand differences in transport- and application-layer perfor- mance across multiple versions of QUIC and in different environ- ments. QUIC generally outperforms TCP, but we also identified performance issues related to window sizes, re-ordered packets, and multiplexing large number of small objects; further, we identify that QUIC’s performance diminishes on mobile devices and over cellular networks. CCS CONCEPTS • Networks→ Transport protocols; Network measurement; KEYWORDS QUIC, transport-layer performance Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from
[email protected]. IMC ’17, November 1–3, 2017, London, United Kingdom © 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Com- puting Machinery. ACM ISBN 978-1-4503-5118-8/17/11. . . $15.00 https://doi.org/10.1145/3131365.3131368 ACM Reference Format: Arash Molavi Kakhki, Samuel Jero, David Choffnes, Cristina Nita-Rotaru, and Alan Mislove. 2017. Taking a Long Look at QUIC. In Proceedings of IMC ’17, London, United Kingdom, November 1–3, 2017, 14 pages. https://doi.org/10.1145/3131365.3131368 1 INTRODUCTION Transport-layer congestion control is one of the most important elements for enabling both fair and high utilization of Internet links shared by multiple flows. As such, new transport-layer proto- cols typically undergo rigorous design, analysis, and evaluation— producing public and repeatable results demonstrating a candidate protocol’s correctness and fairness to existing protocols—before deployment in the OS kernel at scale. Because this process takes time, years can pass between devel- opment of a new transport-layer protocol and its wide deployment in operating systems. In contrast, developing an application-layer transport (i.e., one not requiring OS kernel support) can enable rapid evolution and innovation by requiring only changes to application code, with the potential cost due to performance issues arising from processing packets in userspace instead of in the kernel. The QUIC protocol, initially released by Google in 2013 [10], takes the latter approach by implementing reliable, high- performance, in-order packet delivery with congestion control at the application layer (and using UDP as the transport layer).1 Far from just an experiment in a lab, QUIC is supported by all Google services and the Google Chrome browser; as of 2016, more than 85% of Chrome requests to Google servers use QUIC [36].2 In fact, given the popularity of Google services (including search and video), QUIC now represents a substantial fraction (estimated at 7% [26]) of all Internet traffic. While initial performance results from Google show significant gains compared to TCP for the slowest 1% of con- nections and for video streaming [18], there have been very few repeatable studies measuring and explaining the performance of QUIC compared with standard HTTP/2+TCP [17, 20, 30]. Our overarching goal is to understand the benefits and trade- offs that QUIC provides. However, during our attempts to evaluate QUIC, we identified several key challenges for repeatable, rigor- ous analyses of application-layer transport protocols in general. First, even when the protocol’s source code is publicly available, as QUIC’s is, there may be a gap between what is publicly released and what is deployed on Google clients (i.e., Google Chrome) and 1It also implements TLS and SPDY, as described in the next section. 2Newer versions of QUIC running on servers are incompatibile with older clients, and ISPs some- times block QUIC as an unknown protocol. In such cases, Chrome falls back to TCP. https://doi.org/10.1145/3131365.3131368 https://doi.org/10.1145/3131365.3131368 IMC ’17, November 1–3, 2017, London, United Kingdom A. Molavi Kakhki et al. servers. This requires gray-box testing and calibration to ensure fair comparisons with code running in the wild. Second, explaining protocol performance often requires knowing formal specifications and state machine diagrams, which may quickly become stale due to code evolution (if published at all). 
As a result, we need a way to automatically generate protocol details from execution traces and use them to explain observed performance differences. Third, given that application-layer protocols encounter a potentially endless array of execution environments in the wild, we need to carefully select and configure experimental environments to determine