http://se.cs.toronto.edu/api.php?action=feedcontributions&user=128.100.4.130&feedformat=atom
SE Wiki - User contributions [en]
2019-09-16T23:33:11Z
User contributions
MediaWiki 1.22.2
http://se.cs.toronto.edu/index.php/MatchTool/
MatchTool/
2010-12-22T20:38:49Z
<p>128.100.4.130: </p>
<hr />
<div>MatchTool is a tool for matching hierarchical state machines. Please refer to http://www.cs.toronto.edu/~shiva/MatchTool/ for further information.</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Yinghua_Jia
Yinghua Jia
2010-12-22T20:37:29Z
<p>128.100.4.130: </p>
<hr />
<div>
==Interests==<br />
Inconsistency management, feature interaction<br />
<br />
==Contact Information==<br />
<br />
<br />
==PhD Thesis==<br />
<br />
[ 2003 -- Current ] N/A<br />
==Advisors==<br />
<br />
<br />
* Steve Easterbrook<br />
<br />
<br />
==MSc Thesis==<br />
<br />
[ 2002 -- 2003 ] Managing Feature Interactions at Run-time (advisor: Joanne M. Atlee, University of Waterloo)<br />
<br />
==Publications==<br />
<br />
<br />
<br />
===2003===<br />
<br />
<br />
* Y. Jia. ``Run-time Management for Feature Interactions'', in 6th Workshop on Component-Based Software Engineering joint held with 25th ICSE, May 3-9, 2003<br />
<br />
<br />
<br />
For questions and suggestions contact the webmaster<br />
Formal Methods Group, CS Department, University of Toronto 2004<br />
<br />
[[Category:PhD Student]]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/XCheck
XCheck
2010-12-22T20:37:28Z
<p>128.100.4.130: </p>
<hr />
<div>==XChek==<br />
<br />
XChek is a multi-valued symbolic model checker. It is a generalization of an existing symbolic model checking algorithm to an algorithm for a multi-valued extension of CTL (XCTL). Given a system and an XCTL property, XChek returns the degree to which the system satisfies the property.<br />
<br />
The main features of XChek are the following:<br />
* Multi-valued logics: 2-valued, 3-valued, upset, 4-valued disagreements, etc. Users can define their own logics.<br />
* Multiple model formats: reads SMV and GCLang models. Multi-valued models are specified in XML.<br />
* Proof-like counterexamples: proof rules are used to generate counterexamples. Counterexamples can be generated statically or dynamically, and various viewers are available.<br />
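XChek's own algorithms and input formats are not reproduced here; as a rough, hypothetical illustration of what "multi-valued" means, the following sketch implements a 3-valued (Kleene-style) logic, one of the logic families mentioned above, in which an answer can be true, false, or somewhere in between.

```python
# Illustrative sketch only -- not XChek code. A minimal 3-valued logic:
# values are ordered FALSE < MAYBE < TRUE, conjunction is the lattice
# meet (min), disjunction the join (max).
FALSE, MAYBE, TRUE = 0, 1, 2

def mv_and(a, b):
    return min(a, b)   # meet

def mv_or(a, b):
    return max(a, b)   # join

def mv_not(a):
    return TRUE - a    # negation reverses the order

# A multi-valued checker returns the *degree* to which a property
# holds, rather than a plain yes/no:
assert mv_and(TRUE, MAYBE) == MAYBE
assert mv_or(FALSE, MAYBE) == MAYBE
assert mv_not(MAYBE) == MAYBE
```

User-defined logics in XChek generalize this idea to other lattices (e.g. the 4-valued disagreement logic listed above).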
<br />
Website: [http://www.cs.toronto.edu/fm/xchek/index.html XChek and user manual]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Wendy_Liu
Wendy Liu
2010-12-22T20:37:26Z
<p>128.100.4.130: </p>
<hr />
<div><br />
==Interests==<br />
Requirements Engineering and Software Architectures<br />
<br />
==Contact Information==<br />
<br />
<br />
==PhD Thesis==<br />
<br />
[ 2002 -- Current ]<br />
==Advisors==<br />
<br />
<br />
* Steve Easterbrook<br />
<br />
<br />
==MSc Thesis==<br />
<br />
[ 2000 -- 2002 ] Rule-Based Detection of Inconsistency in Software Design<br />
==Advisors==<br />
<br />
<br />
* Marsha Chechik<br />
<br />
<br />
==Publications==<br />
<br />
<br />
<br />
===2003===<br />
<br />
<br />
* W. Liu and S. M. Easterbrook. ``Eliciting Architectural Decisions from Requirements using a Rule-based Framework'', in Proceedings of the Second International Workshop From Software Requirements to Architectures (STRAW'03), co-located with ICSE 2003, Portland, Oregon, May, 2003<br />
<br />
* W. Liu and S. M. Easterbrook. ``Rule-based Inconsistency Management'', CSRG Technical Report, Department of Computer Science, University of Toronto, August, 2003<br />
<br />
* W. Liu. ``Democracy in Requirements Negotiation: a survey of social choice theory and its impact on requirements negotiation'', CSRG Technical Report, Department of Computer Science, University of Toronto, June, 2003<br />
<br />
<br />
===2002===<br />
<br />
<br />
* W. Liu, S. M. Easterbrook, and J. Mylopoulos. ``Rule-Based Detection of Inconsistency in UML Models'', in the Workshop on Consistency Problems in UML-Based Software Development, at the Fifth International Conference on the Unified Modeling Language, Dresden, Germany, October, 2002<br />
<br />
* W. Liu. ``Rule-Based Detection of Inconsistency in Software Design'', Master's Thesis, Department of Computer Science, University of Toronto, Toronto, Ontario, Canada, July, 2002<br />
<br />
<br />
===2001===<br />
<br />
<br />
* W. Liu. ``Goal Directed Requirements Acquisition for an Industrial Case Study: a Dental Adjudication System'', CSRG Technical Report, Department of Computer Science, University of Toronto, December, 2001<br />
<br />
* W. Liu. ``Dental Adjudication Model - An Application of i* Framework'', CSRG Technical Report, Department of Computer Science, University of Toronto, April, 2001<br />
<br />
* W. Liu. ``Multi-Valued Symbolic LTL Model Checker'', CSRG Technical Report, Department of Computer Science, University of Toronto, March, 2001<br />
<br />
* W. Liu. ``Bridging Software Requirements and Architecture'', CSRG Technical Report, Department of Computer Science, University of Toronto, March, 2001<br />
<br />
* W. Liu. ``The Social Role of University -- a UML Model'', CSRG Technical Report, Department of Computer Science, University of Toronto, February, 2001<br />
<br />
<br />
===Previous===<br />
<br />
<br />
* W. Liu. ``Report on Construction Heuristics for the Traveling Salesman Problem'', CSRG Technical Report, Department of Computer Science, University of Toronto, December, 2000<br />
<br />
<br />
</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Verisec_Suite
Verisec Suite
2010-12-22T20:37:25Z
<p>128.100.4.130: </p>
<hr />
<div>==The Verisec Suite==<br />
<br />
Part of the [[Verisec:_Verification_for_Security|Verisec]] project, the Verisec suite is a benchmarking suite of code examples, suitable for evaluating software model-checkers and other static analysis tools. It is based on snippets of open source programs which contained buffer overflow vulnerabilities, as well as the corresponding patched versions.<br />
<br />
Version 0.1 is the original version, released shortly after publication of our ASE'07 paper. Version 0.2 was released in October 2008, and fixes a number of errors discovered by us and others in the original distribution.<br />
<br />
===Download===<br />
<br />
Version 0.1 : [http://www.cs.toronto.edu/~kelvin/benchmark/verisec-r421.tar.gz verisec-r421.tar.gz]<br />
<br />
Version 0.2 : [http://www.cs.toronto.edu/~tomhart/verisec-r525.tar.gz verisec-r525.tar.gz]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Verisec:_Verification_for_Security
Verisec: Verification for Security
2010-12-22T20:37:24Z
<p>128.100.4.130: </p>
<hr />
<div><!-- this is a sample using Template:Project <br />
Start with two curly braces {{ and the template name Project.<br />
Then tell the template what named params to use, separated by vert lines.<br />
Possible params: over(view), news, pi, faculty, students, collab, funding --> <br />
{{Project|<br />
over=The Verisec project aimed to apply advances in the field of formal verification to computer security. A major focus of the project was on using software model checking to verify the absence of ''implementation-level'' vulnerabilities in computer programs; notably, buffer overflows. We have developed a suite of code examples based on buffer overflows in open source software, called the ''Verisec suite'', and have developed a technique based on ''proof templates'' to make the verification of the absence of buffer overflows more practical for software model checkers based on predicate abstraction and refinement.|<br />
news=|<br />
pi=[[Marsha_Chechik|Marsha Chechik]] and [http://www.eecg.utoronto.ca/~lie/ David Lie]|<br />
students=<br />
* [http://www.cs.toronto.edu/~tomhart Thomas E. Hart]<br />
* Kelvin Ku|<br />
collab=<br />
*[http://www.sei.cmu.edu/staff/arie/ Arie Gurfinkel] --- SEI, CMU (UofT alumnus)|<br />
tool=<br />
* [[PtYasm|PtYasm]]<br />
* [[Verisec_Suite|The Verisec Suite]]|<br />
pubs=<br />
* '''A Buffer Overflow Benchmark for Software Model Checkers'''. Kelvin Ku, Thomas E. Hart, Marsha Chechik, and David Lie. In Proceedings of the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE'07), Atlanta, Georgia, USA, November 2007.<br />
* '''Augmenting Counterexample-Guided Abstraction Refinement with Proof Templates'''. Thomas E. Hart, Kelvin Ku, David Lie, Marsha Chechik, and Arie Gurfinkel. In Proceedings of the 23rd IEEE/ACM International Conference on Automated Software Engineering (ASE'08), L'Aquila, Italy, September 2008.<br />
* '''PTYASM: Software Model Checking with Proof Templates'''. Thomas E. Hart, Kelvin Ku, David Lie, Marsha Chechik, and Arie Gurfinkel. In Proceedings of the 23rd IEEE/ACM International Conference on Automated Software Engineering (ASE'08) --- Tools Demonstrations Track, L'Aquila, Italy, September 2008.<br />
* '''Security Benchmarking using Partial Verification'''. Thomas E. Hart, Marsha Chechik, and David Lie. In Proc. 3rd USENIX Workshop on Hot Topics in Security (HotSec'08), San Jose, California, USA, July 2008.<br />
|<br />
funding=MITACS and NSERC|<br />
}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/VaqTree
VaqTree
2010-12-22T20:37:23Z
<p>128.100.4.130: </p>
<hr />
<div>==VaqTree==<br />
<br />
VaqTree is a tool that efficiently exploits the resolution proof produced by a successful run of BMC to detect vacuity of LTL properties. The goal of this tool is to detect significantly more vacuity than the naive method, with faster runtimes. Results will be presented at FMCAD '07.<br />
<br />
Website: [http://www.cs.toronto.edu/~jsimmond/VaqTree VaqTree, test data and results]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Types
Types
2010-12-22T20:37:22Z
<p>128.100.4.130: </p>
<hr />
<div>== Next week's topic ==<br />
Chapter 22 of Types and Programming Languages<br />
<br />
== Suggested Topics ==<br />
Below is a list of topics that have been suggested for future meetings, in no particular order:<br />
* Constructive type theory: a perspective from computing science. (by: Roland Backhouse. in: Formal Development of Programs and Proofs, ed. Dijkstra, 1990)<br />
* [http://homepages.inf.ed.ac.uk/wadler/topics/blame.html Well-typed programs can't be blamed]<br />
* [http://research.microsoft.com/en-us/um/people/simonpj/papers/constraints/index.htm Modular type inference with local assumptions]<br />
* "First Order Logic with Dependent Types" by Florian Rabe.<br />
<br />
=== General topics ===<br />
* Effect systems<br />
* Subtyping<br />
* Parametric polymorphism / System F<br />
* Intersection types<br />
* Recursive types<br />
<br />
== Previous ==<br />
* 10/26/2010: Chapter 21 of TAPL<br />
* 10/19/2010: Chapter 21 (up to Regular Trees) of TAPL<br />
* 10/12/2010: Chapter 20 of TAPL<br />
* 10/05/2010: Albert gave a talk on predicate subtyping in PVS; Chapter 15 of TAPL<br />
* 09/28/2010: Chapters 8 and 9 of Types and Programming Languages<br />
* 06/01/2010: [http://www.lix.polytechnique.fr/coq/a-short-introduction-to-coq Tutorial on the Coq proof assistant]<br />
* 05/25/2010: [http://portal.acm.org/citation.cfm?id=45065 Abstract types have existential type]<br />
* 05/18/2010: [http://homepages.inf.ed.ac.uk/wadler/topics/linear-logic.html#linear-types Linear types can change the world!]<br />
* 05/11/2010: [http://homepages.inf.ed.ac.uk/wadler/topics/parametricity.html#free Theorems for free!]<br />
* 04/27/2010: A theory of type polymorphism, by Robin Milner<br />
* 04/20/2010: Type systems, by Luca Cardelli</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Tools
Tools
2010-12-22T20:37:20Z
<p>128.100.4.130: </p>
<hr />
<div>==XChek==<br />
<br />
XChek is a multi-valued symbolic model checker. It is a generalization of an existing symbolic model checking algorithm to an algorithm for a multi-valued extension of CTL (XCTL). Given a system and an XCTL property, XChek returns the degree to which the system satisfies the property.<br />
<br />
The main features of XChek are the following:<br />
* Multi-valued logics: 2-valued, 3-valued, upset, 4-valued disagreements, etc. Users can define their own logics.<br />
* Multiple model formats: reads SMV and GCLang models. Multi-valued models are specified in XML.<br />
* Proof-like counterexamples: proof rules are used to generate counterexamples. Counterexamples can be generated statically or dynamically, and various viewers are available.<br />
<br />
Website: [http://www.cs.toronto.edu/fm/xchek/index.html XChek and user manual]<br />
<br />
==VaqTree==<br />
<br />
VaqTree is a tool that efficiently exploits the resolution proof produced by a successful run of BMC to detect vacuity of LTL properties. The goal of this tool is to detect significantly more vacuity than the naive method, with faster runtimes. Results will be presented at FMCAD '07.<br />
<br />
Website: [http://www.cs.toronto.edu/~jsimmond/VaqTree VaqTree, test data and results]<br />
<br />
==TReMer==<br />
<br />
TReMer (Tool for Relationship-Driven Model Merging) is a proof-of-concept tool for merging models constructed during a collaborative model-based development process. The tool offers two approaches to merging: one for merging conceptual models in requirements elicitation, and another for merging behavioural models used in design and implementation. The former provides a generic solution based on category theory for combining structural models represented as graphs; the latter provides a semantics-based solution for constructing behaviour-preserving merges of Statecharts models. <br />
<br />
Website: [http://www.cs.toronto.edu/~mehrdad/tremer/ TReMer and Case-Studies]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Test
Test
2010-12-22T20:37:19Z
<p>128.100.4.130: </p>
<hr />
<div>Say whatever I want<br />
<i>Check the contributor </i></div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Teaching
Teaching
2010-12-22T20:37:18Z
<p>128.100.4.130: </p>
<hr />
<div>This list is not exhaustive, totally correct, or up to date. <br />
==Graduate==<br />
* CS2104/465 - [http://www.cs.toronto.edu/%7Ecsc465h Formal Methods of Program Design]<br />
* CS2107/488 - [http://www.cs.toronto.edu/%7Ecsc488h Interpreters and compilers]<br />
* CS2108 - [http://www.cs.toronto.edu/%7Ecsc2108h Automated verification]<br />
* CS2106 - [http://www.cs.toronto.edu/%7Ecsc2106h Requirements engineering]<br />
* CS2123 - [http://www.cs.toronto.edu/%7Ecsc2123h Managing the Software Organization]<br />
* CS2124 - Topics in programming languages<br />
* CS2125 - [http://www.swc.scipy.org/ Topics in software engineering]<br />
* CS2130 - [http://www.cs.toronto.edu/%7Esme/CSC2130/index.html Empirical research methods in software engineering]<br />
* CS2507 - [http://www.cs.toronto.edu/%7Ecsc2507h Information modeling]<br />
<br />
A more complete list of courses can be found at our [http://web.cs.toronto.edu/dcs/index.php?section=87 graduate department page]. We tend to cover courses in areas 1a (Programming: Languages and Methodology) and 3c (Information Systems).<br />
<br />
==Undergraduate==<br />
Numerous courses, including introductory and advanced programming courses. Many upper-level undergrad courses are cross-listed with our grad classes.</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/TReMer%2B
TReMer+
2010-12-22T20:37:17Z
<p>128.100.4.130: </p>
<hr />
<div>[[Image:tremer.png]]<br />
<br />
== Overview ==<br />
<br />
TReMer+ is a tool for merging and consistency checking of distributed models (i.e., models developed by distributed teams).<br />
<br />
The model merging framework in TReMer+ comes from an earlier conception of the tool, TReMer ('''T'''ool for '''Re'''lationship-Driven Model '''Mer'''ging). The framework includes an operator for merging conceptual models in requirements modelling, and another for merging behavioural models in design and implementation. These operators have been described in the following papers, respectively:<br />
<br />
* Mehrdad Sabetzadeh and Steve Easterbrook, "View Merging in the Presence of Incompleteness and Inconsistency". Requirements Engineering Journal, Volume 11, Number 3, June 2006 [http://www.cs.toronto.edu/~mehrdad/pub/rej06.pdf PDF]<br />
* Shiva Nejati, Mehrdad Sabetzadeh, Marsha Chechik, Steve Easterbrook, and Pamela Zave, "Matching and Merging of Statecharts Specifications". 29th International Conference on Software Engineering (ICSE'07), Minneapolis, USA, May 2007. [http://www.cs.toronto.edu/~shiva/research/pub/icse07.pdf PDF]<br />
<br />
Details about the implementation of our merge operators can be found in the following paper:<br />
* Mehrdad Sabetzadeh, Shiva Nejati, Steve Easterbrook, and Marsha Chechik, "A Relationship-Driven Framework for Model Merging". Workshop on Modeling in Software Engineering (MiSE'07) at the 29th International Conference on Software Engineering, Minneapolis, USA, May 2007. [http://www.cs.toronto.edu/~mehrdad/pub/MISE07.pdf PDF]<br />
<br />
Further, TReMer+ utilizes model merging to provide a flexible platform for consistency checking of distributed models. Our consistency checking approach has been described in the following paper:<br />
<br />
* Mehrdad Sabetzadeh, Shiva Nejati, Sotirios Liaskos, Steve Easterbrook, and Marsha Chechik, "Consistency Checking of Conceptual Models via Model Merging". 15th IEEE International Requirements Engineering Conference (RE'07), New Delhi, India, October 2007. [http://www.cs.toronto.edu/~mehrdad/pub/RE07.pdf PDF].<br />
<br />
Tool support for this approach is described in the following paper:<br />
<br />
* Mehrdad Sabetzadeh, Shiva Nejati, Steve Easterbrook, and Marsha Chechik, "Global Consistency Checking of Distributed Models with TReMer+". 30th International Conference on Software Engineering (ICSE'08), Leipzig, Germany, May 2008. Formal Research Demonstration. [http://www.cs.toronto.edu/~mehrdad/pub/ICSE08Tool.pdf PDF]<br />
<br />
For consistency checking, TReMer+ makes use of CrocoPat, a tool developed by [http://www.cs.sfu.ca/~dbeyer/ Dirk Beyer] and Andreas Noack for querying and manipulating first order relations. The user interface component of TReMer+ is based on the graph visualization and layout technologies from [http://www.jgraph.com/ JGraph].<br />
<br />
<br />
== Project Members ==<br />
* [http://www.cs.toronto.edu/~mehrdad Mehrdad Sabetzadeh]<br />
* [http://www.cs.toronto.edu/~shiva Shiva Nejati]<br />
* [http://www.cs.toronto.edu/~sme Steve Easterbrook]<br />
* [http://www.cs.toronto.edu/~chechik Marsha Chechik]<br />
<br />
<br />
== System Requirements ==<br />
<br />
* Java 1.5 or better (TReMer+ does not work with Java 1.4 and earlier.)<br />
** We have checked compatibility only with Sun's Java distribution for Linux and Windows, and Apple's Java distribution for Mac OS X. Mileage may vary with other Java distributions. If you cannot get TReMer+ to work on your platform, please let us know.<br />
<br />
<br />
== Download ==<br />
* TReMer+ can be downloaded [http://www.cs.toronto.edu/~mehrdad/tremer/TReMerPlus.jar here] (last modified: September 2008).<br />
* To use TReMer+ for consistency checking, you also need to download and install [http://www.cs.sfu.ca/~dbeyer/CrocoPat/download.html CrocoPat 2.1.3].<br />
<br />
''The source code for TReMer+ is available upon request under GNU GPL (General Public License).''<br />
<br />
<br />
== Usage==<br />
java -jar TReMerPlus.jar<br />
<br />
The above invokes the graphical user interface of TReMer+. Through this interface you can create new or open existing projects. Within each project, you can specify models, build relationships between them, hypothesize and compute merges, and perform automatic consistency checking. The GUI currently supports state machines, simple UML domain models, and Entity-Relationship diagrams.<br />
<br />
<br />
== Examples and Case Studies ==<br />
* [http://www.cs.toronto.edu/~mehrdad/tremer/examples.zip Simple Demonstration Examples]<br />
* [http://www.cs.toronto.edu/~shiva/tremer/att.zip AT&T Telecom Features Case-Study]<br />
* [http://www.cs.toronto.edu/~mehrdad/tremer/hospital.zip Hospital Domain Modelling Case-Study]<br />
<br />
<br />
== Flash Demos ==<br />
* [http://www.cs.toronto.edu/~mehrdad/tremer/demo1/ Demo I]<br />
* [http://www.cs.toronto.edu/~mehrdad/tremer/demo2/ Demo II]<br />
* [http://www.cs.toronto.edu/~mehrdad/tremer/demo3/ Demo III]<br />
* [http://www.cs.toronto.edu/~mehrdad/tremer/demo4/ Demo IV]<br />
<br />
== Consistency Checking Performance ==<br />
* [http://www.cs.toronto.edu/~mehrdad/tremer/performance.zip Test data]<br />
<br />
== Bug Reports ==<br />
Bug reports are very welcome. Please direct them to {mehrdad | shiva} @cs.toronto.edu<br />
<br />
== Disclaimer ==<br />
TReMer+ is provided with no warranty. Use at your own risk.</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/TReMer
TReMer
2010-12-22T20:37:15Z
<p>128.100.4.130: </p>
<hr />
<div>'''This version has been superseded by [[TReMer+]].'''</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/StudentTemplate
StudentTemplate
2010-12-22T20:37:14Z
<p>128.100.4.130: </p>
<hr />
<div><!-- this is a sample using Template:Student<br />
Start with two curly braces {{ and the template name Student.<br />
Then tell the template what named params to use, separated by vert lines.<br />
Possible params: interest, bio, status, super, talks, awards, contact, projects, teach, cv --> <br />
{{Student|<br />
interest=Some of my research interests|<br />
bio=|<br />
status=Ph.d. Candidate|<br />
super=[[Marsha_Chechik|Marsha Chechik]]|<br />
talks=|<br />
awards=|<br />
contact=[mailto:someone@anytown.com my email]|<br />
projects=<br />
* [[BPEL]]|<br />
teach=|<br />
cv=|<br />
}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Student1
Student1
2010-12-22T20:37:13Z
<p>128.100.4.130: </p>
<hr />
<div>[[Category:PhD Student]]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Steve_Easterbrook
Steve Easterbrook
2010-12-22T20:37:12Z
<p>128.100.4.130: </p>
<hr />
<div>== Interests ==<br />
Inconsistencies during the development process; software requirements elicitation and specification.<br />
<br />
== Contact ==<br />
Steve is located in BA3259. Call 8-3610 to get in touch if the main entrance is closed.<br />
===Website===<br />
http://www.cs.toronto.edu/~sme/<br />
===DBLP===<br />
http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/e/Easterbrook:Steve_M=.html<br />
<br />
<br />
<br />
[[Category:Professor]]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/Unification
Solid Intuition/Unification
2010-12-22T20:37:11Z
<p>128.100.4.130: </p>
<hr />
<div>==Instantiation of Laws==<br />
<br />
In order to make proving a pure point-and-click activity, instantiation of laws has to be done automatically. Fortunately, given a focus expression and one side of a law, it is a purely automatic process to match those two expressions and instantiate the ∀-quantified variables in the law. This is what unification does! In instantiating quantified variables, the law of generalisation is implicitly applied. In formal on-paper proofs this application of the law is never mentioned and taken for granted (especially since the laws are often not even written in quantified form).<br />
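A hypothetical sketch (all representation choices invented) of this matching step: one side of a law is matched against the focus expression, and a substitution instantiating the law's ∀-quantified variables is computed. Terms are nested tuples `(op, args...)`; law variables are strings prefixed with `?`.

```python
# Illustrative sketch only. One-way matching of a law's side against a
# focus expression, returning the substitution for the law's variables.
def match(pattern, expr, subst=None):
    """Return a substitution making pattern equal to expr, or None."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:                       # variable seen before:
            return subst if subst[pattern] == expr else None
        subst[pattern] = expr                      # instantiate it
        return subst
    if isinstance(pattern, tuple) and isinstance(expr, tuple):
        if len(pattern) != len(expr) or pattern[0] != expr[0]:
            return None                            # different operators
        for p, e in zip(pattern[1:], expr[1:]):
            subst = match(p, e, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == expr else None      # constants must agree

# Law side:  ?x + 0   matched against focus:  (a*b) + 0
law_lhs = ('+', '?x', 0)
focus = ('+', ('*', 'a', 'b'), 0)
assert match(law_lhs, focus) == {'?x': ('*', 'a', 'b')}
```

Full unification also instantiates variables occurring in the focus; the one-way matching above covers only the law-instantiation case described in this section.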
<br />
==Instantiation of Variables in the Focus==<br />
<br />
To use the Specialisation and <br />
The laws<br />
Unification is also used when we want to apply</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/Types
Solid Intuition/Types
2010-12-22T20:37:09Z
<p>128.100.4.130: </p>
<hr />
<div>APToP generalises the notion of a function to a ''scope'' which is used for functions and quantifiers. All variables are bound inside a scope. All identifiers which are not variables denote constants. Examples for constants are T, ⊥, 0, 1, +, - and all other other symbols which have a fixed meaning in the theory. From a minimalist stand-point a constant is the same as a variable whose introduction we cannot see (i.e. we cannot zoom out of its scope). During proofs we can zoom into a scope and then the this scope's variables also behave like constants. The difference between variables and constants is that variables can be instantiated. But since the rule of instantiation says that all instances of a variable have to be instantiated with the same expression, we can only do it when we have zoomed out far enough.<br />
<br />
To further generalize our model of expressions, all operators are treated like variables which have a function type. <br />
<br />
All variables are bunches, although many are elementary bunches ("elements" for short), and this distinction is crucial. All variables are also strings, although some variables are one-item strings ("items" for short); this distinction is also important but not as omnipresent in the proof rules. (Note: the "proof rules" are our only semantics, since we don't attach any other meaning to an expression.)<br />
The reader might wonder how a variable can be both a bunch and a string, and smell a contradiction here. But there is no contradiction as long as we do not claim that a bunch cannot also be a string and vice versa. Indeed we will say that the bunch and string creation operators (<tt>'''null , ´ nil ;'''</tt>) distribute among each other, and so a bunch of strings becomes a string of bunches. <br />
For the vast majority of functions (scopes), arguments are elements (and not necessarily items), which means that we can distribute all functions over bunches without any harm. It also means that all ∀ quantifications are over elements.<br />
Now, again, the reader may ask how some variables can be non-elements at all, since we said above that all variables can be seen as having a scope somewhere. Well, we have a second kind of function whose arguments can be any kind of bunch (again, an item or not) and whose application does not distribute over bunches. This seems to work fine.<br />
<br />
(I guess, the reader can see from this, how the question of itemness is not that important: functions never distribute over strings. Indeed the only thing that distributes over strings is bunch-operators, but that's because string operators (just like most operators of the theory) distribute over bunches. Strings might seem a little scary because they initially look like bunches, but actually they are pretty tame. (Not lame, though!))<br />
<br />
Finally, all program-variables are both items and elements. Programming is relatively simple; sets and lists exist as ordinary data structures. (And as soon as we have invented modules the programmer can choose a different implementation for every set and list that he wants to use.)<br />
<br />
==Discovering element-hood==<br />
<br />
To use the law of function application < x -> expr > arg = (replace x in expr with arg), we have to show that "arg" is an element. This check is done together with type-checking (see next section) and therefore starts all the way down from the leaves of an expression. <br />
<br />
First of all, there is a caveat which holds for element-hood checking as well as for type-checking: both are sound and incomplete. For instance, a solution-bunch will always check as non-elementary, creating a proof obligation for the user to discharge if she truly believes that this bunch has exactly one element. Likewise, in a context that contains x:int and x>0, it is left to the user to show x:nat. (This is much like PVS.)<br />
<br />
For the purposes of checking element-hood there are three kinds of variables and functions. The easiest are functions which distribute over bunches: e-checking can simply skip over them and check that all their arguments are elements. Another kind are those which are not (necessarily) elements, such as "nat", "a,b", "null" and the solution quantifier. The third kind is the most important: functions that take bunch arguments and return elements. There are very few of these in the book (namely ":", "=", "{}", "¢"), plus any that users may add. Now, to check element-hood we simply start with the non-function variables in the leaves of an expression (whose element-hood is known from the scope through which they are introduced) and iterate down. All we need to check is that every (possibly) non-elementary expression is elementified by one of the functions which do so.<br />
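The three kinds of operators can be sketched as follows (a hypothetical illustration; the operator sets and term representation are invented for the example):

```python
# Illustrative sketch only. Element-hood ("e-") checking from the
# leaves of an expression, distinguishing the three operator kinds.
NON_ELEMENT = {'nat', ',', 'null'}    # (possibly) non-elementary results
ELEMENTIFIES = {':', '=', '{}', '¢'}  # take bunch arguments, return elements

def is_element(expr):
    if isinstance(expr, str):          # a leaf variable or constant
        return expr not in NON_ELEMENT
    op, *args = expr
    if op in ELEMENTIFIES:             # always yields an element
        return True
    if op in NON_ELEMENT:              # e.g. bunch union ","
        return False
    # all remaining operators distribute over bunches: the result is
    # an element exactly when every argument is
    return all(is_element(a) for a in args)

assert is_element(('+', 'x', 'y'))                 # ordinary arithmetic
assert not is_element((',', 'a', 'b'))             # a two-element bunch
assert is_element((':', (',', 'a', 'b'), 'nat'))   # ":" elementifies
```

A failed check here corresponds to the proof obligation left to the user, as in the caveat above.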
<br />
==Type checking==<br />
<br />
Our theory, aPToP, uses the bunch inclusion operator ":" to make statements about types. For non-function expressions the right-hand side of a typing statement is just an ordinary bunch. For function expressions the right-hand side is another function; there is an axiom defining what ":" means when comparing functions and expressions like "int→bool" are just shorthands for functions. Every typing statement is an ordinary expression of the theory and can be proven using ordinary proof techniques. <br />
The use of ":" means that it is very natural for a variable to have many types. For example, since "nat : int", for any "x : nat" we also have "x : int" by simple transitivity. Also, we can have different typing axioms for one and the same function, e.g. "_+_ : nat→nat→nat" and "_+_ : int→int→int" (although neither type is included in the other). This all comes for free, without the need for special rules to reason about it.<br />
<br />
When using aPToP in practice, usually all typing is implicit since it is trivial and doesn't interfere with the interesting parts of a proof. Only in rare circumstances do we have to think about the validity of a variable instantiation. Our type checker embodies this by discharging most typing side-conditions via some very simple rules described here. All other typing conditions will simply be left as proof obligations to the user. (With successfully typed subexpressions as antecedents and full context from the surrounding proof, of course.)<br />
<br />
As in the e-check, all a type-checker has to do is start from the leaves of an expression, which all have type statements because they were introduced via some scope, and then work its way down using the following simple law about types: "x:A ∧ f:A→B ⇒ f x : B". To make type-checking bother-free for the user, this law is hard-wired into the type-checker. From a semantic standpoint, however, it is just an ordinary proven expression (which follows from the definition of ":") and can also be used in proofs; for instance, when doing a non-trivial typing proof by hand.<br />
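A hypothetical sketch of that hard-wired law (representation invented): leaf types come from the environment, and applications are typed via "x:A ∧ f:A→B ⇒ f x : B". For simplicity this uses exact type equality; aPToP's ":" inclusion (e.g. "nat : int") and multiple typings per function are deliberately omitted, so a failure here stands for the proof obligation left to the user.

```python
# Illustrative sketch only, not the real checker. Types are strings or
# ('->', A, B) arrows; expressions are leaf names or (f, arg) applications.
def type_of(expr, env):
    if isinstance(expr, str):                 # leaf: look up its typing
        return env[expr]
    f, arg = expr                             # application "f arg"
    tf, ta = type_of(f, env), type_of(arg, env)
    if isinstance(tf, tuple) and tf[0] == '->' and tf[1] == ta:
        return tf[2]                          # x:A and f:A->B give f x : B
    raise TypeError("typing left as a proof obligation to the user")

env = {'succ': ('->', 'nat', 'nat'), 'x': 'nat'}
assert type_of(('succ', ('succ', 'x')), env) == 'nat'
```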
<br />
In order to express laws on sets and functions in the most general fashion, we will eventually need type variables. For a first prototype, however, it is sufficient to define a bunch "basic = number, [number], {number}, number→number, number→number→number" which is a type for most expressions we will ever encounter. Then we can write, for example ∀‹ A, B : basic |→ A,B = B,A ›.</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/Screen
Solid Intuition/Screen
2010-12-22T20:37:08Z
<p>128.100.4.130: </p>
<hr />
<div>The screen has principally three parts: the current focus expression on top (quite small), while the larger lower part is split in two: One side has expressions to replace the focus with, which are generated by matching laws from theories and context. The other side is a browser for laws and context.<br />
<br />
== the focus expression ==<br />
<br />
This shows the expression on which we are currently working. Initially it is the statement which we want to prove. After zooming in, it is a sub-expression of that statement, and after applying some proof steps it is the current state of the proof.<br />
<br />
=== zoom ===<br />
<br />
We can focus in on a sub-expression by pointing the mouse at one side of the expression's top-most operator and turning the mouse-wheel once. To zoom out, turn the mouse-wheel the other way anywhere.<br />
In expressions whose top-most operators are associative or continuous (such as "a+b+c" or "a≤b<c"), zooming is special and yet to be specified; a coloured expression background moves with the mouse pointer and always shows what the focus will be if we zoom in at that point.<br />
<br />
=== direction ===<br />
Direction operators can be: <br />
= : ≤ ⇒<br />
<br />
Each operator exists in only one direction. Both directions can nevertheless be used by placing the operator to the left or the right side of the focus.<br />
<br />
The initial direction of an expression to prove is "⇒ exp", with an implicit "T" that we are aiming for. Whenever we zoom into a monotonic operator, the direction stays the same. When zooming into an anti-monotonic operator, the direction switches sides. When zooming past a direction operator (= : ≤ ⇒), that operator becomes the new direction (inverted if our direction was previously "exp ⇒").<br />
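One plausible reading of these rules can be sketched in Python. Representing a direction as an operator plus the side it sits on is my own modelling assumption, not the tool's specification:

```python
# Hypothetical model of direction bookkeeping while zooming.  A direction
# is (op, pos): ('⇒', 'before') stands for the initial "⇒ exp" (with the
# implicit T), ('⇒', 'after') for "exp ⇒".

FLIP = {'⇒': '⇐', '⇐': '⇒', '≤': '≥', '≥': '≤', '=': '='}

def zoom(direction, through):
    """`through` is 'monotonic', 'antimonotonic', or a direction operator
    that we zoom past."""
    op, pos = direction
    if through == 'monotonic':
        return (op, pos)                   # direction stays the same
    if through == 'antimonotonic':         # direction switches sides
        return (op, 'after' if pos == 'before' else 'before')
    # zooming past a direction operator: it becomes the new direction,
    # inverted if the previous direction was of the "exp ⇒" kind
    return (FLIP[through], pos) if pos == 'after' else (through, pos)
```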
<br />
When proving a statement about strictly smaller (or proper subset), the program automatically keeps track of having at least one strictly smaller step in the chain. (Maybe it will display "<" as the direction of the proof as long as no such step has been executed.)<br />
<br />
== the "next step" list ==<br />
<br />
The program automatically matches the current focus expression against all the available laws. If one side of a law matches the focus, they are unified and the other side of the law is displayed as a potential next step. For this to happen, the top-most operator of a law has to be equality or correspond to the current direction of proof.<br />
Every item in the list is labelled with the name of the law that produced it and with the direction operator that the proof step would use. Selecting an item executes the proof step.<br />
<br />
Steps can be selected using the number keys on the keyboard. This way simple steps can be executed without moving the mouse out of the focus area.<br />
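The matching that populates this list can be sketched as follows. This is a hypothetical illustration: the single-lowercase-letter convention for variables and all function names are my own simplifications, and the tool's real unification would additionally handle associativity and symmetry:

```python
# Hypothetical sketch of populating the "next step" list: unify the focus
# with either side of each law and offer the instantiated other side.

def match(pattern, expr, subst):
    """First-order matching; variables are single lowercase letters.
    Returns an extended substitution, or None on failure."""
    if isinstance(pattern, str) and len(pattern) == 1 and pattern.islower():
        if pattern in subst:
            return subst if subst[pattern] == expr else None
        return {**subst, pattern: expr}
    if isinstance(pattern, tuple) and isinstance(expr, tuple) \
            and len(pattern) == len(expr):
        for p, e in zip(pattern, expr):
            subst = match(p, e, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == expr else None

def instantiate(term, subst):
    if isinstance(term, tuple):
        return tuple(instantiate(t, subst) for t in term)
    return subst.get(term, term)

def next_steps(focus, laws):
    """laws: (name, lhs, rhs) triples whose top-most operator is '='."""
    steps = []
    for name, lhs, rhs in laws:
        for here, there in ((lhs, rhs), (rhs, lhs)):
            s = match(here, focus, {})
            if s is not None:
                steps.append((name, instantiate(there, s)))
    return steps
```

With the symmetry law a+b = b+a and the focus 1+2, the list would offer the step 2+1, labelled "symmetry".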
<br />
== the law browser ==<br />
<br />
The law browser has a line of tabs on top which switch between sets of laws to display. The most important tab is the "context" tab, which shows all the local laws that result from focusing deeper into an expression. This view helps the user understand how context laws work by observing them while zooming on different sub-expressions.<br />
<br />
The law browser can also be used to find a law that one would have expected to show up on the "next step" list, figure out why it is not there, and determine what preparatory work is needed to make it applicable. Laws can be found by theory, by name and by pattern matching.</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/Milestones
Solid Intuition/Milestones
2010-12-22T20:37:07Z
<p>128.100.4.130: </p>
<hr />
<div>I'll separate the features of the proof editor into three groups:<br />
# the must-haves that define Version 1.0, which I plan to do as part of my Master's research; <br />
# the should-haves, which are optional but very useful things. From this set I'll draw problems to be treated in my PhD research; and<br />
# the can-haves, which are optional but still interesting things. The proof editor will provide a framework in which to tackle those problems, but they will probably be too much for a single thesis. That means other students and researchers can pick from here and do interesting work.<br />
<br />
To be clear about the last two points: I am not saying that no-one else may do research on the group 2 problems, or that everyone else should do research on the group 3 problems. It is just my guess of what I will and will not be able to cover in my thesis.<br />
The first point is very important, because I think it is the basis for all of the other work. We need some kind of editor running to do anything I'd like to do in my research. Furthermore, the basic editor will also help a lot in learning Programming Theory, in exploring different variants of it, in doing larger, more realistic case-studies and, last but not least, in attracting new students interested in the subject.<br />
<br />
== Must-haves for Prototype 1 ==<br />
<br />
* a way to enter propositions and to prove them using the rules and laws in Ric's book, especially the context rules which are implemented by window inference.<br />
<br />
* symmetry, transitivity, reflexivity and perhaps distribution and other laws are built into the prover, which means that they can be performed directly, without invoking a general rule and finding the respective laws to apply. <br />
<br />
* a place listing all the laws with a very basic ability to search them (pattern matching, search by name).<br />
<br />
* a place listing all the theorems which have been proven so far. (Each instance of doing window inference will open an additional place that states all the local assumptions for that inference window.)<br />
<br />
* the logic (symbols, axioms and laws) provided inside of the prover must at least include:<br />
** booleans, bunches, functions and quantifiers to do calculations in predicate calculus. (Having all this, we'll also add ''sets'' with little effort).<br />
** numbers, strings, lists, assignment, dependent composition, ''ok'', recursion, var and frame to do programming, i.e. write specifications and refine them to programs. (Having all this, we'll also add ''characters'' and ''texts'' with little effort.)<br />
<br />
* propositions are entered using the keyboard, with some convenient way of entering symbols which are not on the standard keyboard. It will be taken into account that the operators of the logic have to be typed very often, even more often than most of the letters. So probably fixed keys should be assigned to the most important symbols and simple key-combinations to all other symbols. (Online help for the mapping will be provided.)<br />
<br />
* type checking is needed to implement instantiation.<br />
<br />
* rules can be applied by selecting sub-expressions with the mouse and then selecting the rule from a menu. (Shortcut keys are also available and shown in the menu.)<br />
<br />
* proofs are shown below their hypothesis (the proposition they're proving), where a green check mark behind the hypothesis indicates a complete proof, and a red cloud with the label "magic" on it indicates a gap (an unfinished place in a proof).<br />
<br />
Since the programming (especially recursive refinements) needs some handling at the meta-level, we might have a working version 0.5 which is able to do some basic logic proofs, but does not yet include programming.<br />
<br />
This list seems much longer than the following two, but that is just because it is much more detailed, while the others sketch big areas in terse words. All of the sections will become more detailed as work progresses. More milestones will be added, and completed milestones get the most detailed documentation.<br />
<br />
<br />
== Milestone 2 ==<br />
<br />
* the relation between specifications and programs is managed (somehow), including recursive refinements: a program is a specification that only uses executable operators and specifications that have been refined (somewhere)<br />
<br />
* programs can be run: the user enters values for the state variables and gets back values for the state variables<br />
<br />
<br />
== Additions of special interest to me ==<br />
<br />
* ways to store and retrieve theorems easily when they are needed<br />
<br />
* procedures, modules, abstract data types.<br />
<br />
* ways to divide and share the name space, so that large programs can be conveniently handled. (Models for this are Eiffel, B and LARCH.) Do this for programs and theories. <br />
<br />
* case studies: <br />
** library of generic algorithms and data structures<br />
** hardware verification<br />
** a CPU scheduler, memory manager, persistent storage facility, network interface (Q: what are those four called when they come together?)<br />
<br />
* Unified Algebra and case studies with advanced mathematics (i.e. more than the basic integer-algebra used in most of programming)<br />
<br />
== Additions in which others might be interested ==<br />
<br />
* calling other provers and using their results in a proof, with a justification that contains just the other prover's name<br />
<br />
* concurrency and interaction (some of it will already be needed for some of the case studies).<br />
<br />
* programming language: loops, assertions, functional programming<br />
<br />
* soundness proofs and other theoretical treatments</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/Literature
Solid Intuition/Literature
2010-12-22T20:37:06Z
<p>128.100.4.130: </p>
<hr />
<div>==User Interaction==<br />
<br />
===Pie Menus===<br />
<br />
* Callahan J., Hopkins D., Weiser M., Shneiderman B., An Empirical Comparison of Pie versus Linear Menus, Proceedings of CHI ‘88, (1988)<br />
: This one cites the following as "inventor" of pie menus:<br />
* Hopkins, D., Callahan, J., and Weiser, M. Pies: Implementation, Evaluation and Application of Circular Menus. University of Maryland Computer Science Department Technical Report, 1988. <br />
<br />
* Hopkins D., The Design and Implementation of Pie Menus, Dr. Dobb's Journal, December 1991<br />
<br />
* Martin S Dulberg, Robert St Amant & Luke S Zettlemoyer, An Imprecise Mouse Gesture for the Fast Activation of Controls, in: Angela Sasse and Chris Johnson (Editors), Human–Computer Interaction — INTERACT’99, IOS Press, 1999 ((c) IFIP TC.13)<br />
: Introduces the notion of a "flick" which is the use of a pie menu without actually showing the menu.<br />
<br />
* Shengdong Zhao, Ravin Balakrishnan, Simple vs. Compound Mark Hierarchical Marking Menus, ACM UIST Symposium on User Interface Software and Technology, 2004<br />
: Quote: ''Marking menus [6] are a variant of pie or radial menus [4, 12] that allow a user to perform a menu selection by either selecting an item from a popup radial menu, or by making a straight mark in the direction of the desired menu item without popping-up the menu. As with linear menus, marking menus can also support menu hierarchies, where users make “zig-zag” compound marks to select from multiple levels of submenus (Figure 1a). Extensive research [6, 7, 11] has shown that marking menus have significant advantages over regular pie/radial menus and linear menus, including support for seamless transition from novice to expert usage, and selection speeds up to 3.5 times faster.''<br />
<br />
* Tapia, M., & Kurtenbach, G. (1995). Some design refinements and principles on the appearance and behavior of marking menus. ACM UIST Symposium on User Interface Software and Technology. p. 189-195.<br />
<br />
* Kurtenbach, G., & Buxton, W. (1993). The limits of expert performance using hierarchical marking menus. ACM CHI Conference on Human Factors in Computing Systems. p. 35-42.<br />
<br />
* George Fitzmaurice, Azam Khan, Robert Pieké, Bill Buxton, Gordon Kurtenbach, Tracking menus, UIST, 2003 <br />
: Quote: ''We describe a new type of graphical user interface widget, known as a “tracking menu. ” A tracking menu consists of a cluster of graphical buttons, and as with traditional menus, the cursor can be moved within the menu to select and interact with items. However, unlike traditional menus, when the cursor hits the edge of the menu, the menu moves to continue tracking the cursor. Thus, the menu always stays under the cursor and close at hand. In this paper we define the behavior of tracking menus, show unique affordances of the widget, present a variety of examples, and discuss design characteristics.''<br />
<br />
==Specifications==<br />
<br />
* Richard S. Bird: Small Specification Exercises<br />
Intro: Sometimes it seems easier to construct a program for a problem than a suitable specification of the problem.<br />
<br />
==Refinement Tools==<br />
<br />
* ''A Review of Existing Refinement Tools''[http://citeseer.ist.psu.edu/carrington94review.html] David Carrington, Ian Hayes, et al., 1994<br />
<br />
:This paper compares two tools which work on the Refinement Calculus as defined by Carroll Morgan. Refinement rules are applied by the tools and the proof obligations are left to be done by a prover. Integration of prover and refinement tool is only discussed abstractly as a wish for the future.<br />
<br />
==Provers==<br />
<br />
* YET_TO_READ: Manfred Kerber: "Proof Planning: A Practical Approach to Mechanized Reasoning in Mathematics"<br />
The attempt to mechanize mathematical reasoning belongs to the first experiments in artificial intelligence in the 1950s (Newell et al., 1957). However, the idea to automate or to support deduction turned out to be harder than originally expected. This can be seen, not least, in the multitude of approaches that were pursued to model different aspects of mathematical reasoning. There are different dimensions according to which these systems can be classified: input language (e.g., order-sorted first-order logic), calculus (e.g., resolution), interaction level (e.g., batch mode), proof output (e.g., refutation graph), and purpose (e.g., automated theorem proving), as well as many more subtle points concerning the fine-tuning of the proof search.<br />
<br />
In this contribution the proof planning approach will be presented. Since it is not the mainstream approach to mechanized reasoning, it seems worthwhile to look at it in a more principled way and to contrast it with other approaches. In order to do so, I categorize the systems according to their purpose. The different attempts can be roughly categorized into three classes: machine-oriented automated theorem provers, interactive proof checkers, and human-oriented theorem provers. In the rest of the introduction we shall take a closer look at these different categories.<br />
<br />
* PVS: A Prototype Verification System, (1992 paper)<br />
Quote: ''Although we are reasonably satisfied that PVS is an effective tool for developing readable specifications and formal proofs with considerable human efficiency, we are still significantly short of our goal of employing mechanization to produce proofs that humans find truly perspicuous.''<br />
<br />
* Notes on PVS from a HOL perspective (Mike Gordon, 1995)<br />
:''PVS is a tightly integrated and coherent system that is easy and fun to use. Compared with HOL it is much simpler to learn and (at least for beginners) much quicker at proving theorems.''<br />
:''There is an ‘engineering philosophy’ associated with PVS that emphasizes getting useful work done. HOL, perhaps because of its academic (ivory tower) origins, has historically placed more emphasis on logical foundations and less on usability.''<br />
<br />
* [http://www.lemma-one.com/ProofPower/index/ ProofPower]: a HOL88-based prover with support for Z. Typical X Window and LaTeX interface.<br />
<br />
==Logic and Presentation==<br />
===Leslie Lamport: How to Write a Proof, 1993===<br />
<br />
Advocates semi-formal, natural-deduction based proofs with a nice abbreviation for proposition numbers: 3.1.1.1.2 becomes <4>2.<br />
<br />
<blockquote><br />
''"I have also found structured proofs very helpful when I need a variant of an existing theorem, perhaps with a slightly weaker hypothesis. In a properly written proof, where every use of an assumption or a proof step is explicit, simple text searching reveals exactly where every hypothesis is used."''<br />
</blockquote><br />
<br />
==History of Logic==<br />
<br />
* [http://www.math.bas.bg/~vlsot Vladimir Sotirov] (Sofia): ''Leibniz’s “Calculemus!” at Work''<br />
:Quote from Leibniz (translated from the French): ''"The common way of speaking regards rather the individuals, whereas Aristotle's has more regard to ideas or universals. For in saying 'every man is an animal', I mean that all men are included among all animals; but at the same time I understand that the idea of animal is included in the idea of man. Animal comprises more individuals than man, but man comprises more ideas or more formalities; the one has more instances, the other more degrees of reality; the one has more extension, the other more intension."''<br />
:I like to replace man/animal with bool/number and thus obtain a statement that talks about Ric's Unified Algebra.<br />
<br />
* Origins of Types: http://www.rbjones.com/rbjpub/logic/cl/cl010.htm</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/Features
Solid Intuition/Features
2010-12-22T20:37:04Z
<p>128.100.4.130: </p>
<hr />
<div>A draft list of the features that need to be implemented for SI Milestone 1.<br />
<br />
===General Design===<br />
<br />
The most important difference from existing graphical math editors is that there is a set of primitive proof-step-generating rules which basically only let us apply axioms and laws that are themselves formulated in the expression language. Every law that we prove therefore extends our portfolio of possible proof steps. Axioms can be added just by typing them in as an expression.<br />
<br />
The screen is divided into a part that shows the current proof (attempt) and another part that shows laws (and axioms). Selecting a (sub)expression in the proof will automatically show all the matching laws on the other side of the screen, sorted by closeness of match. A law can then be applied with a single click.<br />
<br />
===Editor part===<br />
<br />
* Data model for Expressions.<br />
* Saving and loading of Expressions from the data model to external files and back.<br />
* Rendering of Expressions on Screen (in M1 with no frills: the output is an unmodified text window).<br />
* Entering of expressions.<br />
* Keeping a collection of theorems (each of them being an expression), stating the type of free variables in the theorems, loading and saving the theorems (probably with a set of laws loaded automatically).<br />
* Transparent Parsing: <br />
** when the cursor (or mouse) is near an operator, the editor will show the implicit parentheses around that operator's sub-expression. Example: 1 + 2/3. Touching + shows as (1 + 2/3), while touching / shows as 1 + (2/3). <br />
** when the cursor (or mouse) is near an associative or continuing operator, the status line will show the word "associative" or "continuing" and highlight all other operators on the same level of associativity or continuation.<br />
* Zoom: any line from a proof can be hidden using the scroll-wheel of the mouse. The hints before and after that line will then be shown together on one line. Optionally we could also zoom out some of the hints, so that only the most interesting hint, or none, is left.<br />
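The parenthesis-display idea above can be illustrated with a tiny sketch; the tree representation and the `render` helper are assumptions of mine, not the editor's actual data model:

```python
# Hypothetical illustration of transparent parsing: render an expression
# tree, showing the implicit parentheses around the sub-expression of the
# operator currently being touched.

def render(expr, touched=None):
    """Leaves are strings; applications are (op, left, right) triples."""
    if isinstance(expr, str):
        return expr
    op, left, right = expr
    body = f"{render(left, touched)} {op} {render(right, touched)}"
    return f"({body})" if expr is touched else body

e = ('+', '1', ('/', '2', '3'))     # the expression  1 + 2/3
# touching '+' gives "(1 + 2 / 3)"; touching '/' gives "1 + (2 / 3)"
```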
<br />
===Prover part===<br />
<br />
* ''Unification'' of expressions with free variables, taking into account associativity and symmetry. (Associativity is trivial and symmetry might take some backtracking, but is still simple enough.)<br />
<br />
* Data model for proofs consisting of<br />
** an expression that is the hypothesis (which contains all prerequisites which are not theorems as antecedents of an implication)<br />
** the name of the proof format of which there are two: <br />
**# if the hypothesis is of the form X <> Y, where <> is a transitive relation (e.g. equality, implication, greater/smaller than, ...), the proof can have the form X <> ... <> Y. We'll call this ''Proof by transitivity''.<br />
**# in all other cases for hypothesis H the proof will have the form H <= ... <= Th, where <= is implication and Th is a theorem. We'll call this ''Proof by reduction to a theorem''.<br />
** note that in our current working hypothesis, we do not need special proof formats like "by case distinction" or "by induction", neither are there special proof rules for this. It is just done by applying the ''case law'' ('''if'''-expression) and the ''induction law'', which are both ordinary theorems. <br />
** the proof itself as an expression with hints.<br />
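The proof-by-transitivity format described above lends itself to a mechanical sanity check. Here is a hedged sketch; the set of permissible relations and the function name are illustrative assumptions:

```python
# Hypothetical check of a "proof by transitivity" chain X <> ... <> Y.

TRANSITIVE = {'=', '⇒', '≤', '<'}

def chain_relation(steps):
    """steps alternates expressions and relations: [e0, r0, e1, r1, e2].
    Returns the overall relation the chain establishes, or None."""
    rels = set(steps[1::2])
    if not rels <= TRANSITIVE:
        return None
    rels.discard('=')              # equality links don't weaken the claim
    if not rels:
        return '='
    if len(rels) == 1:
        return rels.pop()
    if rels == {'≤', '<'}:         # one strict step makes the chain strict
        return '<'
    return None                    # mixing e.g. '⇒' with '≤' is not allowed
```

This also reflects the earlier remark about tracking whether a chain about strictly-smaller contains at least one strict step.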
<br />
* Application of rules. (Every rule can be applied forward or backwards, depending on whether the user selected the expression before or after the "..." in the proof.)<br />
** instantiation of free variables and instantiation of functions (besides equivalence and implication, functions are the only language elements that always need to be hard-wired).<br />
** substitution of equals for equals (this is also covered by the "instance rule" of aPToP). This means <br />
**#given a theorem of the form "a=b", obtain a new line in the proof expression, where the previous line has an "a" and the new line has a "b" instead, the lines are connected by "=".<br />
**#given any theorem, whose instance appears in the proof expression, replace it with "T" and connect the lines with "=".<br />
** substitution with monotonicity (i.e. weakening/strengthening). Same as above only substitute "==>" for "=". Antimonotonicity automatically changes it to "<==".<br />
** substitution with context (i.e. window inference)<br />
** equality substitution with one of the laws of inclusion, i.e. applying the law a==>b can be used to replace a by "a and b" or to replace b by "b or a". (same for <= and min/max and for : and ,')<br />
** arithmetic, i.e. evaluation of expressions containing only numbers and arithmetic operators (+, -, *, /, power). To simplify expressions with variables, the laws of arithmetic have to be used manually.<br />
* laws can also be applied to expressions in the context (or even in global laws), the resulting new expression is then immediately added as a new context law (this emulates reasoning which is often implicit in calculational proofs, e.g. in discharging antecedents of laws)<br />
<br />
Note that substitution is the principal way of applying laws: either the law is an equality or implication, in which case one side of the law is unified with a sub term and replaced by the other side, or the law itself is unified with the sub term and replaced by T.<br />
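As a minimal sketch of that principal substitution mechanism (free-variable unification is omitted here for brevity, and the names are my own):

```python
# Hypothetical sketch of "substitution of equals for equals" with a
# ground (variable-free) instance of a law lhs = rhs: produce the next
# proof line by replacing the first occurrence of lhs.

def substitute(expr, lhs, rhs):
    if expr == lhs:
        return rhs
    if isinstance(expr, tuple):
        for i, sub in enumerate(expr):
            new = substitute(sub, lhs, rhs)
            if new != sub:                      # replace only one occurrence
                return expr[:i] + (new,) + expr[i + 1:]
    return expr

line = ('∧', ('∨', 'a', 'a'), 'b')
# applying the idempotence instance a∨a = a gives ('∧', 'a', 'b'),
# and the two proof lines would be connected by '='.
```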
<br />
The following rules could be automatically generated from theorems, but will probably be hard-coded in M1:<br />
* permissible relations to be used in place of <> in a proof by transitivity<br />
* symmetric, associative operators<br />
* monotonicity and antimonotonicity in the generalised substitution rule<br />
* generation of local hypotheses in window inference<br />
<br />
Robinson's Ergo Prover also had window-inference hard-coded in the first version and later made it generic.</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/Differences
Solid Intuition/Differences
2010-12-22T20:37:03Z
<p>128.100.4.130: </p>
<hr />
<div>When doing something that has apparently been done often already and seems to be well understood, it is helpful to look at what one is doing differently from others. The question is: will the differences in the approach lead to substantial differences in the outcome? Or will one just produce a minor variant on a worn-out theme?<br />
<br />
In this page I will argue that although the differences between Solid Intuition and existing provers are very simple and easy to explain, they run very deep and will lead to some significant differences. A lot of known problems do not exist in this approach, and many new problems will show up.<br />
<br />
I will sort the differences into some categories, starting with the goals and objectives of Solid Intuition versus existing provers, since these are the root of many other differences.<br />
<br />
Here is a summary:<br />
# Solid Intuition focuses first and foremost on the User Interface and the User Experience. The logical and mathematical aspects are subordinate to this.<br />
# Proofs as we construct them are primarily a means of communication between humans. A proof shows not only ''that'' something is true, but also ''why'' it is true. (For example, in programming a proof justifies that all the parts of a program are necessary, when they are all needed in the proof. And we can also see what part of the program is needed in what part of the proof. Our conjecture is that this permits a degree of understanding to the readers and authors of proofs which makes it easier to change and adapt programs, but also to develop solutions to similar and also more complex problems.)<br />
# The logical foundations and the theory of programming that we are using are the most simple and practical that have been proposed to date.<br />
<br />
<br />
== Table of Differences ==<br />
<br />
{| class="wikitable"<br />
|- <br />
! prover !! proof editor<br />
|- <br />
| powerful || easy to use<br />
|- <br />
| automatic || manual<br />
|- <br />
| has a meta-logic / core logic || has only one logic<br />
|- <br />
| has a minimal set of rules to allow for different logics to be implemented by changing the axioms<br />
| has a redundant set of rules to make proofs easier and the tool more consistent<br />
|- <br />
| ''to prove'' is understood as ''to discover if it's a theorem'' <br />
| ''to prove'' is understood as ''to make transparent to anyone that it is a theorem''<br />
|- <br />
| designed as a software development tool. Later also used for teaching purposes.<br />
| designed as a teaching tool. Maybe also usable for industrial software development. (Models are the Pascal and Smalltalk programming languages.)<br />
|}<br />
<br />
==An analogy==<br />
<br />
The proof script that results from proving a theorem with a software theorem prover resembles a detailed journal of all the things a user had to do in order to make the computer accept the proof. It contains all the gory technical details, omitting nothing. Like a researcher's working recorded on video.<br />
<br />
The proof editor, on the other hand, intends to show the ''result'' of the proving process. It is the essence of the researcher's work as it would be published, for example, in a journal article. The only technical details contained in the presentation are those that the author has chosen to present. <br />
<br />
== Goals / Objectives ==<br />
<br />
The name ''prover'' used by HOL, PVS, Isabelle and others implies that these tools have been designed to prove: they are machines that prove automatically while being directed by a human programmer. I say ''programmer'' instead of ''user'' because that person does not directly enter proofs; rather, she has to write tactics and scripts, which are nothing but programs written in a very high-level language specialised to direct a prover in proving. As a consequence, the user must not only learn the language in which propositions are written and the language in which proofs are written, but additionally the language in which proof scripts are written.<br />
<br />
On the other hand, the name ''proof editor'' which we use for Solid Intuition implies that the program is merely a helper for the user's own proofs. The important difference is that the ''proof editor'' tries to speak the language of the user, while a ''prover'' usually imposes its language and its way of working on the user. The language used by Solid Intuition is clearly designed to correspond to what people use when they have to do the work on their own. <br />
<br />
=== Application area ===<br />
<br />
An important piece of advice if one wants to be successful: specialize! We implement that advice by tailoring our tool to one kind of program (or rather, one kind of programming): the development of general algorithms. These are the basic algorithms of computer science (searching, sorting, numerics, memory allocation, ...) which are taught to all students and which are still being reimplemented from time to time (not as often as they used to be, since function libraries have finally made their breakthrough in industry). Areas that still need to implement their own basic algorithms are high-safety and high-security programming, and occasionally also high-efficiency programming.<br />
<br />
== Underlying Logic ==<br />
<br />
Most other provers use some kind of sequent calculus as a way to manage the local theorems and current hypotheses in proofs. But since the operators of the sequent calculus mirror those of boolean algebra, an additional layer of complexity is added. Since we are using equational proofs and context rules, we don't need an additional layer: boolean algebra suffices for everything. Likewise, many formal systems try to develop very general theories; for example, they might start with the axioms for sets (as Abrial does in his "B book") and then construct the integers via sets. By comparison, introducing both integers and sets with their own axioms is much easier. Everything about integers can then be proven by just referring to integer axioms. Finally, the same approach also holds for specifications of software modules: we can try to model them via sets, as the Z notation does, or we can just give their laws as algebraic specifications (as in LARCH). Of course, when a software module has a more complex state, an indirect specification becomes necessary. But still, we have the advantage of specifying everything as simply and straightforwardly as possible.<br />
<br />
A similar argument applies to our formalisation of programs. A specification is just a boolean expression that describes the relation between inputs and outputs. A program also describes this relation, but additionally is executable by a computer. The refinement relation between programs and specifications is just boolean implication. This way, we don't need any mechanisms to extract proof obligations: the refinement statement itself is what we prove. For a comparison between this approach and what other people use, see Ric's paper [http://www.cs.toronto.edu/~hehner/wrong.pdf What's wrong with formal programming methods?] (1991).<br />
<br />
The idea of having one simple set of "core rules", on which everything else can be implemented and any logic modelled, appeals to almost everyone. Researchers can't resist the generality implied by this approach. But in most cases, only one logic will be implemented inside the prover's meta-logic (or core logic). For example, in Isabelle, HOL is the embedded logic used for most of the applications. But in that case, the core logic is just an additional layer in the software that has to be understood and put to use.<br />
<br />
In our approach we will try to resist the temptation of being too general and instead focus on just the proof rules and logic that are needed for the refinement of programs.<br />
<br />
== Development Methodology ==<br />
<br />
Other provers are usually implemented in functional and/or logical programming languages. This makes the development of inference algorithms easier, but the design of a good user interface harder. We'll do it the other way round: since the User Interface is the most important thing, we'll use a language, tools and methods that best support the design and implementation of good user interfaces. <br />
<br />
Furthermore we'll make use of the current state of the art in software engineering regarding design by contract, documentation, tests, collaboration and bug/feature management.<br />
<br />
One consequence of this is that we can make up for not having a meta-logic / core logic by creating a good contract-/interface-based design of the program and isolating the inference rules well from the rest of the program. Then the part of the editor that needs to be validated to ensure that the system is sound will be clearly separated from all the fancy user-interface functions. Other provers use a text-based interface between the graphical part and the underlying proof engine. We claim that a well-documented, contract-based OO or function-call interface can do the same job even better.</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/Debt
Solid Intuition/Debt
2010-12-22T20:37:02Z
<p>128.100.4.130: </p>
<hr />
<div>We have assumed everywhere that unification is done automatically in the background. Unification instantiates forall-quantified variables, thereby doing scope applications (function applications). A side condition of those applications is that the function argument is in the function's domain. Thus we have to prove this condition for every instantiation done. I'll call such an additional proof goal a "debt".<br />
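A debt could be pictured as a side-condition goal recorded together with the context in which it arose. The following is a hypothetical sketch, not a design commitment; all names are illustrative:<br />

```python
# Sketch of a debt ledger: each instantiation that needs a domain side
# condition pushes a proof obligation ("debt") with its context, which
# the user can work off later instead of proving it immediately.

class Debt:
    def __init__(self, goal, context):
        self.goal = goal          # the side condition still to be proved
        self.context = context    # assumptions in scope when it arose

class Ledger:
    def __init__(self):
        self.debts = []

    def borrow(self, goal, context):
        """Record a side condition instead of discharging it now."""
        self.debts.append(Debt(goal, context))

    def work_off(self):
        """Hand back the next debt to prove, with its full context."""
        return self.debts.pop() if self.debts else None

ledger = Ledger()
# Instantiating f at x creates the obligation "x is in dom f".
ledger.borrow("x in dom f", context=["a => x = y", "x : nat"])
d = ledger.work_off()
print(d.goal)  # x in dom f
```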
<br />
Here's what the user should be able to do with his debt:<br />
* eyeball-check that a statement is provable before pushing it into debt<br />
* at every point be able to choose between continuing to prove along the main thread and working off some debt<br />
* keep track of all the context that belongs to every item of debt<br />
* be able to find and apply proof steps that have preconditions, e.g. for "a ⇒ x=y": replace x by z and make a debt</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/Components
Solid Intuition/Components
2010-12-22T20:37:01Z
<p>128.100.4.130: </p>
<hr />
<div>#REDIRECT [[Solid Intuition/Features]]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition/AC
Solid Intuition/AC
2010-12-22T20:37:00Z
<p>128.100.4.130: </p>
<hr />
<div>=Associativity and Commutativity (Symmetry)=<br />
<br />
==Status Quo==<br />
<br />
When doing formal proofs on paper, the associativity of operators is usually taken into account by simply dropping the parentheses around expressions. This makes reasoning intuitively easy while still being fully formal. The form a+b+c can be seen as the normal form for the expressions (a+b)+c and a+(b+c). <br />
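Flattening nested applications of an associative operator into one n-ary node gives exactly this normal form. Here is a small sketch (the tuple encoding of expressions is illustrative only):<br />

```python
# Flatten nested applications of an associative operator into one n-ary
# node, so (a+b)+c and a+(b+c) share the normal form ('+', a, b, c).

def flatten(expr, op='+'):
    if not (isinstance(expr, tuple) and expr[0] == op):
        return expr                       # a leaf, or a different operator
    args = []
    for arg in expr[1:]:
        arg = flatten(arg, op)
        if isinstance(arg, tuple) and arg[0] == op:
            args.extend(arg[1:])          # splice in nested '+' arguments
        else:
            args.append(arg)
    return (op, *args)

left = ('+', ('+', 'a', 'b'), 'c')   # (a+b)+c
right = ('+', 'a', ('+', 'b', 'c'))  # a+(b+c)
print(flatten(left) == flatten(right))  # True: both yield ('+', 'a', 'b', 'c')
```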
<br />
Symmetry, on the other hand, does not have a normal form and is usually handled on paper by applying the law a+b=b+a implicitly, without mentioning it explicitly.<br />
<br />
==Handling in the proof tool: first attempt==<br />
<br />
Both issues can be handled in a very straightforward way which does not require much extra work in implementing and explaining the tool: for associativity we can always use the normal form; for symmetry we can have the law of symmetry shown as the first match (when it applies) and invoked with a simple touch of the key "1" (for "apply first matching step"). When viewing the full proof, symmetry steps (like other "minor" steps) can simply be hidden from the presentation using the mechanism provided for hiding trivial steps anyway.<br />
<br />
==Feature Interaction==<br />
<br />
The straightforward way of handling associativity conflicts with our specification of zoom, in which every expression has a top-most operator and we can zoom into either argument of it. <br />
Now we could treat an associative operator as an n-ary operator and zoom directly into the subexpressions below it, i.e. from a+b+c zoom directly to a, b, or c without passing via either a+b or b+c.<br />
But the problem is that for applying laws we sometimes want to focus on exactly a+b or b+c. A simple example is the application of the symmetry law.<br />
Another problem is unification: for example, applying the symmetry law to a+b+c could lead to either b+c+a or c+a+b, depending on which association is chosen.<br />
<br />
==Compromise for first Prototype==<br />
<br />
Both associativity and symmetry have to be explicitly invoked via the standard mechanism of making a proof step. This makes best use of existing functionality and avoids feature interaction. A helpful addition is to display superfluous parentheses in grey, or only when the mouse hovers over them; this ensures that a unique tree structure is preserved and no feature interaction occurs, while the screen still looks quite clean.<br />
A better mechanism for AC can be thought out once the basics of the tool are working and can be played around with.<br />
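The grey-parenthesis idea can be sketched as a pretty-printer that tags parentheses around same-operator subexpressions as superfluous (an illustrative encoding; a real UI would render the tagged parentheses in grey or only on hover):<br />

```python
# Pretty-print a binary expression tree, marking parentheses around
# same-operator (associative) subexpressions with angle brackets so a
# UI could grey them out while keeping the tree structure unique.

def render(expr, parent_op=None):
    if not isinstance(expr, tuple):
        return expr                       # a variable leaf
    op, left, right = expr
    body = f"{render(left, op)}{op}{render(right, op)}"
    if parent_op is None:
        return body                       # top level needs no parentheses
    if parent_op == op:
        return f"⟨{body}⟩"                # superfluous: only fixes association
    return f"({body})"                    # different operator: keep visible

print(render(('+', ('+', 'a', 'b'), 'c')))  # ⟨a+b⟩+c
```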
<br />
The following example shows that this is not much of a loss. In the expression "a+b+c" we can either zoom on "a" or "b+c", or we can zoom on "a+b" or "c". Since zoom is done with a simple click of a button, "left" or "right", we need some extra user action to choose which parenthesization we want to assume. Given that application of a law is also done with a simple click of a button, this action can just as well be the choice of the associativity law. This has the advantage of using two simple separate gestures, and the interaction with other features is immediately clear. (For instance, it takes three small gestures to zoom on "b", but then there is no doubt whether zooming out leads to "a+b" or "b+c", because one of them was an intermediate step and that's what we also pass on our way back.)</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Solid_Intuition
Solid Intuition
2010-12-22T20:36:58Z
<p>128.100.4.130: </p>
<hr />
<div>The '''Solid Intuition/''' name space contains notes for [[Robert Will|Robert]]'s research project for which he keeps a tentative description in [[user:robertw/Solid Intuition]].</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/SofterReadingGroup
SofterReadingGroup
2010-12-22T20:36:57Z
<p>128.100.4.130: </p>
<hr />
<div>Monday July 16th<br />
1 - 3 pm<br />
BA7172<br />
<br />
----<br />
<br />
*Dealing with "Map Shock": A Systematic Approach for Managing Complexity in Requirements Modelling<br />
<br />
Moody, D.<br />
<br />
Recommended by Sotirios<br />
----<br />
Monday June 25<br />
1-3 pm, BA7172<br />
<br />
* What Makes a Good Diagram? Improving the Cognitive Effectiveness of Diagrams in IS Development<br />
<br />
Moody, D.<br />
<br />
----<br />
<br />
Recommended by Sotirios<br />
<br />
Monday June 11<br />
1:00 to 3:00<br />
BA7172<br />
<br />
* Program comprehension (NE)<br />
Most of the work here looks at how software developers model and make sense of source code and designs. Related to reverse engineering research.<br />
<br />
* ''Cognitive Design Elements to Support the Construction of a Mental Model During Software Exploration'' - Storey, Fracchia, Mueller -- http://www.bibsonomy.org/bibtex/2b3b53638f43625f9b5c6ed871adc5bbf/neilernst<br />
<br />
----<br />
Tuesday May 1<br />
1:30 - 3:30<br />
BA7321<br />
<br />
* Program comprehension (NE)<br />
Most of the work here looks at how software developers model and make sense of source code and designs. Related to reverse engineering research.<br />
* ''How do program understanding tools affect how programmers understand programs?'' - by Storey, Wong and Mueller -- http://dx.doi.org/10.1016/S0167-6423(99)00036-2<br />
<br />
----<br />
Tuesday April 3rd<br />
1:30 to 3:30<br />
BA4290<br />
* Cognitive engineering (suggested by NE)<br />
This field is concerned with complex systems (planes, plants, military) and making them fit for human use - to reduce operator error, promote safety, etc.<br />
* ''Cognitive Systems Engineering'' by Hollnagel and Woods - http://www.bibsonomy.org/bibtex/2227d97355aec8c8002d7b325ad95a142/neilernst<br />
<br />
----<br />
Tuesday March 20th,<br />
2 to 3:30 <br />
BA7172<br />
<br />
* ''Skills, Rules and Knowledge - Human performance monitoring'' by Jens Rasmussen - classic description of how systems should be designed to account for human capabilities - http://www.bibsonomy.org/bibtex/23e0fc7b962143a2b724bf7243fc4050a/neilernst<br />
----<br />
<br />
Tuesday March 6th<br />
1:30 to 3:30<br />
BA4290<br />
<br />
* [http://www.cs.toronto.edu/~jenhork/Papers/EvaluationExperiment_and_Appendix.pdf Jennifer's Draft paper for the Empirical SE Course]<br />
Trying to test qualities about mental and physical models before/after i* evaluation.<br />
----<br />
<br />
Tuesday February 20th<br />
1:30 to 3:30<br />
BA7172<br />
<br />
* A great philosophically oriented analysis of abstraction and modelling by Peter B. Ladkin: [http://www.rvs.uni-bielefeld.de/publications/Reports/Abstraction.html Abstraction and Modelling]. <br />
<br />
Here is an excerpt from the abstract: Engineers talk of abstractions and models. I define both, consistently with the way that engineers, computer scientists, and formal logicians use the terms. I propose that both abstractions and models have what Searle calls an agentive function, namely that both models and abstractions are models/abstractions of something, for a purpose. It follows that both abstraction and modelling are most fruitfully thought of as ternary relations (or, when the purpose is forgotten, binary).<br />
----<br />
<br />
Tues Jan. 23rd<br />
<br />
* This is a nice paper on mental models by Gottfried Vosgerau: [http://www.uni-tuebingen.de/selbstbewusstsein/staff/vosgerau/MM_vosgerau.pdf#search=%22the%20perceptual%20nature%20of%20mental%20models%22 The Perceptual Nature of Mental Models]. Although it is focussed on mental models, I think it sheds a lot of light on how and why we use concrete models (e.g. i* models).<br />
<br />
[MentalModelNotes]<br />
----<br />
<br />
Tues. Dec. 12th<br />
12 to 2<br />
BA5256<br />
<br />
<br />
* Cognitive dimensions of notations<br />
Proceedings of the fifth conference of the British Computer Society Human-Computer Interaction Specialist Group, People and Computers V, pp. 443--460, 1989. <br />
<br />
http://www.cl.cam.ac.uk/users/afb21/CognitiveDimensions/papers/Green1989.pdf<br />
----<br />
<br />
Nov 14, 2006<br />
<br />
* [http://www.cacs.louisiana.edu/~walenste/pubs/2002-sv-dsvis-walenstein.pdf Foundations of Cognitive Support: Toward Abstract Patterns of Usefulness]<br />
He defines a theory of how tools (including models) can be deemed useful to our thinking (including modeling) processes. The theory is based on distributed cognition.<br />
<br />
[CognitiveSupportNotes]<br />
<br />
----<br />
<br />
Oct 20, 2006<br />
<br />
* J.D. Lee and K.A. See, [http://www.engineering.uiowa.edu/~csl/publications/pdf/leesee04.pdf Trust in Automation: Designing for Appropriate Reliance]<br />
This appeared in the following journal: HUMAN FACTORS, Vol. 46, No. 1, Spring 2004, pp. 50–80.<br />
<br />
[TrustInAutomation]<br />
<br />
----<br />
Oct 6, 2006<br />
* Cognitive Bias or Judgmental Bias<br />
<br />
(Jorge writes:) Although the classic text on these topics is the book ''Judgment under Uncertainty: Heuristics and Biases'', by Kahneman and Tversky, I haven't been able to find a summary of it for us. They seem to have published the essentials in an article in Science, but it's old and U of T doesn't have access to it.<br />
<br />
So, instead, I propose we read a paper on the economical irrationality of humans, also by Tversky and Kahneman. They show how our decisions do not make sense from a purely game theoretical perspective, and analyze which other factors seem relevant for humans when making decisions.<br />
<br />
The paper is [http://www.jstor.org/view/00219398/di993851/99p02752/0 Rational Choice and the Framing of Decisions], published in The Journal of Business. The link to software engineering might be very weak, although I could argue that these considerations are fundamental for a proper conceptualization of goal analyses.<br />
----<br />
Sept 22, 2006<br />
* Sterman '91, A Skeptic's Guide to Computer Models<br />
http://sysdyn.clexchange.org/sdep/Roadmaps/RM9/D-4101-1.pdf <br />
<br />
Sterman's own work is on system dynamics with qualitative and quantitative modeling. This paper discusses various kinds of dynamics modeling - simulation, optimization, econometric. Relevant when we consider what i* model evaluation can/should do.<br />
<br />
I think this paper may also raise questions about what's hard and what's soft.<br />
----<br />
Aug 18, 2006<br />
* Checkland's Soft Systems Methodology<br />
Checkland was a pioneer in introducing a "softer" approach to IS development. The main reference is a book. Checkland, Peter (1981). Systems Thinking, Systems Practice. London, John Wiley & Sons. <br />
<br />
This retrospective article seems to give a summary.<br />
http://www3.interscience.wiley.com/cgi-bin/fulltext/75502921/PDFSTART<br />
http://www3.interscience.wiley.com/cgi-bin/abstract/75502921/ABSTRACT?CRETRY=1&SRETRY=0<br />
Soft systems methodology: a thirty year retrospective<br />
Peter Checkland. <br />
''Systems Research and Behavioral Science''<br />
Volume 17, Issue S1 , Pages S11 - S58<br />
Supplement: Peter Checkland at 70: A Review of Soft Systems Thinking<br />
Copyright © 2000 John Wiley & Sons, Ltd.<br />
Reproduced from Soft Systems Methodology in Action, John Wiley & Sons, Ltd, Chichester, 1999.<br />
<br />
see also http://en.wikipedia.org/wiki/Soft_systems <br />
<br />
----<br />
Aug 6, 2006 <br />
* Shum Hammond 94 IJHCS Argumentation-based design rationale: what use at what cost?<br />
http://dx.doi.org/10.1006/ijhc.1994.1029<br />
----<br />
July 24, 2006<br />
* Scaife, Rogers - External cognition: how do graphical representations work?<br />
http://dx.doi.org/10.1006/ijhc.1996.0048<br />
----<br />
July 10, 2006<br />
* David Kirsh, When is Information Explicitly Represented? Chapter 12 of "Information, Language and Cognition"<br />
----<br />
June 22, 2006<br />
* Yvonne Rogers, Distributed Cognition and Communication, Encyclopedia of Language and Linguistics (2nd ed.)</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Slicing_of_Model_Collections
Slicing of Model Collections
2010-12-22T20:36:56Z
<p>128.100.4.130: </p>
<hr />
<div>== Overview ==<br />
Efficient analysis and transformation of inter-related models critically depends on the ability to identify and extract model slices that are relevant to the development task at hand, while accounting for the complex relationships between the models. For example, in a large system, to verify that a given requirement is indeed satisfied by the system’s design, it is often necessary to extract slices of multiple inter-related design models (e.g. class, state, sequence, activity diagrams) that contribute to the satisfaction of the requirement. In addition, slicing is a prerequisite for the use of model operations because an operation typically takes only a part of a model collection as input.<br />
For human-centric tasks such as inspections, slicing helps reduce cognitive load, thus improving comprehension and making it less likely that important issues will be missed. For automated tasks such as inconsistency detection and resolution (see Activity 1.2), slicing can lead to substantial improvements in scalability by reducing the size of the state space, thus making computationally expensive techniques such as search applicable to large systems.<br />
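The idea of a slice over inter-related models can be illustrated by treating model elements and their cross-model relations as a graph, then keeping everything reachable from the elements relevant to the task. The element names and relations below are invented for illustration; real slicers use far richer, typed relationships:<br />

```python
from collections import deque

# Model elements across a collection, linked by dependency/trace relations
# (e.g. a sequence-diagram message depends on a class-diagram method).
# A slice is everything reachable from the seed elements of interest.

relations = {
    "req:R1":           ["seq:login()", "class:Session"],
    "seq:login()":      ["class:User.login"],
    "class:Session":    [],
    "class:User.login": ["class:User"],
    "class:User":       [],
    "req:R2":           ["seq:logout()"],  # unrelated to R1, excluded
    "seq:logout()":     [],
}

def slice_models(relations, seeds):
    """Breadth-first reachability from the seed elements."""
    seen, queue = set(seeds), deque(seeds)
    while queue:
        for dep in relations.get(queue.popleft(), ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(slice_models(relations, ["req:R1"])))
```

Note how the slice for requirement R1 spans three model kinds (requirement, sequence diagram, class diagram) while leaving out everything that only serves R2.<br />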
== Planned steps and method == <br />
Our research on slicing of model collections will build directly on the notions of model relationships identified and formalized in Activity 3.1. We are going to augment these relationships with domain-specific information to remove ambiguities in the relations and to deduce finer-grained relations between model elements. These augmented relationships will then be used for devising formal algorithms to automatically extract minimized slices of a collection of models relevant to a particular task. We will evaluate the usefulness and practical utility of our algorithms on the NECSIS Exemplars. <br />
Slices of model collections are often subject to various quality criteria. In particular, (1) the slices have to be consistent with the source models, e.g., for behavioural models, the temporal ordering of actions in the source models should be preserved by the slices; (2) the slices have to be consistent with one another, e.g. a UML sequence diagram slice cannot refer to class methods that do not appear in the corresponding UML class diagram slice; (3) the slices have to include sufficient information from the original (non-sliced) models to enable conclusive analysis and eliminate the need to refer back to the original models for verification and validation tasks. Elaborating these quality criteria and ensuring that they are met by our slicing algorithms will be an integral part of our research. In year 3 of the project, we will integrate our efforts with Task-Based Model Slicing (Theme 1).<br />
== Advancement of the state of the art ==<br />
Most of the existing work on slicing is concerned with program code. In this context, slicing is used primarily as an aid for debugging, motivated by the observation that a slice corresponds to the mental abstractions that developers make while debugging [Wei81]. More recently, slicing has been studied for models as well, and various model slicing techniques have been proposed, e.g. [KSH+03] provides a technique for the slicing of state-based models using dependence analysis, and [KMS05, SCW+10] provide techniques for slicing UML class models based on constraint annotations in the models. These approaches focus exclusively on slicing of individual models and do not address slicing of model collections, which will be the focus of the research we propose here.<br />
<br />
== References ==<br />
<br />
[AHL+09] Adam Antonik, Michael Huth, Kim Guldstrand Larsen, Ulrik Nyman, and Andrzej<br />
Wasowski. Exptime-complete decision problems for modal and mixed specifications. Electr. Notes Theor. Comput. Sci., 242(1):19--33, 2009.<br />
<br />
[BKLS09] Nikola Benes, Jan Kretnsky, Kim Guldstrand Larsen, and Jiri Srba. Checking thorough refinement on modal transition systems is Exptime-complete. In ICTAC, pages 112--126, 2009.<br />
<br />
[Ber03] Bernstein, P.: Applying Model Management to Classical Meta Data Problems. In Proc. Conf. on Innovative Database Research, pp. 209-220, 2003.<br />
<br />
[BCE+06] G. Brunet, M. Chechik, S. Easterbrook, S. Nejati, N. Niu, M. Sabetzadeh. A Manifesto for Model Merging. In Proceedings of ICSE’06 International Workshop on Global Integrated Model Management (GAMMA'06), May 2006.<br />
<br />
[BLO+06] L.C. Briand, Y. Labiche, L. O'Sullivan, and M.M. Sówka. Automated impact analysis of UML models. Journal of Systems and Software, 79(3):339--352, 2006.<br />
<br />
[CLN+09] M. Chechik, W. Lai, S. Nejati, J. Cabot, Z. Diskin, S. Easterbrook, M. Sabetzadeh, R. Salay. Relationship-Based Change Propagation: A Case Study. In Proceedings of ICSE’09 International Workshop on Modeling in Software Engineering, 2009.<br />
<br />
[DBV06] Del Fabro, M., Bezivin, J. and Valduriez, P.: Weaving models with the Eclipse AMW plugin. In Proc. of Eclipse Modeling Symposium, Eclipse Summit Europe (2006).<br />
<br />
[Eg07] A. Egyed. "Fixing Inconsistencies in UML Design Models". In Proceedings of 29th International Conference on Software Engineering (ICSE’07), pages 297-301, Minneapolis, MN, USA, May 2007.<br />
<br />
[Eg10] A. Egyed. "Automatically Detecting and Tracking Inconsistencies in Software Design Models". In IEEE Transactions on Software Engineering, Preprint, March 2010.<br />
<br />
[ELF08] A. Egyed, E. Letier, and A. Finkelstein. Generating and Evaluating Choices for Fixing Inconsistencies in UML Design Models. In Proceedings of ASE'08, pages 99--108. IEEE Computer Society, 2008.<br />
<br />
[EMP10] The Eclipse Modeling Platform whitepaper. http://wiki.eclipse.org/images/7/7e/Modeling_Platform_v1_-_Feb_17.pdf, 2010.<br />
<br />
[FBDCU10] D. Fischbein, G. Brunet, N D’Ippolito, M. Chechik, S. Uchitel. Weak Alphabet Merging of Partial Behaviour Models. ACM Transactions on Software Engineering and Methodology (TOSEM), 2010, 49 pages. In press.<br />
<br />
[JVB+10] Jouault, F., Vanhooff, B., Brunelière, H., Doux, G., Berbers, Y., and Bezivin, J.: Inter- DSL Traceability and Navigability Support by Combining Megamodeling and Model Weaving. Special Track on the Coordination Models, Languages and Applications at SAC 2010, March 2010. <br />
<br />
[KMS05] H. Kagdi, J. Maletic, A. Sutton, Context-free slicing of UML class models. In Proceedings of 21st IEEE International Conference on Software Maintenance (ICSM'05), 2005, pp. 635–638.<br />
<br />
[KPP08] Kolovos, D.S. and Paige, R.F. and Polack, F.A.C.: Novel features in languages of the epsilon model management platform. In Proceedings of the 2008 international workshop on Models in software engineering, pp. 69—73, 2008.<br />
<br />
[KS03] Kalfoglou, Y. and Schorlemmer, M.: Ontology mapping: the state of the art. The Knowledge Engineering Review 18(1) pp. 1-31. 2003<br />
<br />
[KSH+03] B. Korel, I. Singh, L. Ho Tahat, B. Vaysburg, Slicing of state-based models. In Proceedings of 19th International Conference on Software Maintenance (ICSM'03), 2003, pp. 34–43.<br />
<br />
[LT88] Kim Guldstrand Larsen and Bent Thomsen. A modal process logic. In Proceedings of the 5th Annual IEEE Symposium on Logic in Computer Science (LICS'88), pages 203--210, 1988.<br />
<br />
[MMW02] Mens, K., Mens, T. and Wermelinger, M.: Maintaining software through intentional source-code views. In proc. of the 14th International Conference on Software engineering and knowledge engineering (2002)<br />
<br />
[MV06] T. Mens, R. Van Der Straeten, M. D'Hondt. "Detecting and Resolving Model Inconsistencies Using Transformation Dependency Analysis". In Proceedings of 9th International Conference on Model Driven Engineering Languages and Systems (MoDELS’06), pages 200-214, Genova, Italy, October 2006.<br />
<br />
[Mug10] Muggleton, S.H.: webpage on Inductive Logic Programming. http://www.doc.ic.ac.uk/~shm/ilp.html (2010).<br />
<br />
[NE01] B. Nuseibeh, S. Easterbrook, and A. Russo. "Making Inconsistency Respectable in Software Development". The Journal of Systems and Software, 58(2):171-180, 2001.<br />
<br />
[NKF94] Nuseibeh, B., Kramer J. and Finkelstein, A. : A Framework for Expressing the Relationships Between Multiple Views in Requirements Specifications. IEEE Transactions on Software Engineering, vol. 20, no. 10, pp. 760--773 (1994) <br />
<br />
[OMG10] Software Process Engineering Metamodel V2.0 specification. Object Management Group. July 2010.<br />
<br />
[SCW+10] A. Shaikh, R. Clarisó, U. Wiil, N. Memon, Verification-driven slicing of UML/OCL models. In Proceedings of 25th IEEE/ACM International Conference on Automated Software Engineering (ASE'10), 2010, pp. 185-194.<br />
<br />
[SE05] Sabetzadeh, M. and Easterbrook, S.: An Algebraic Framework for Merging Incomplete and Inconsistent Views. 13th IEEE RE Conference, Paris, France, 2005.<br />
<br />
[SN07] M. Sabetzadeh, S. Nejati, S. Liaskos, S. Easterbrook, M. Chechik. "Consistency Checking of Conceptual Models via Model Merging". In Proceedings of 15th IEEE International Requirements Engineering Conference (RE’07), pages 221-230, New Delhi, India, October 2007. <br />
<br />
[SN08] M. Sabetzadeh, S. Nejati, S. Easterbrook, M. Chechik. "Global Consistency Checking of Distributed Models with TReMer+". In Proceedings of 30th International Conference on Software Engineering (ICSE’08), pages 815-818, Leipzig, Germany, May 2008. Formal Research Demonstration.<br />
<br />
[SC07] R. Salay, M. Chechik, S. Easterbrook, Z. Diskin, S. Nejati, M. Sabetzadeh, P. McCormick, P. Viriyakattiyaporn. An Eclipse-Based Model Management Framework. In Proceedings of OOPSLA'07 Workshop on Eclipse Technology, November 2007. <br />
<br />
[SME09] Salay, R., Mylopoulos, J., Easterbrook, S.M.: Using Macromodels to Manage Collections of Related Models. In Proc. CAiSE 2009, pp. 141--155 (2009).<br />
<br />
[SM09] Salay, R., Mylopoulos, J.: Improving Model Quality Using Diagram Coverage Criteria. In Proc. CAiSE 2009, pp. 186--200 (2009).<br />
<br />
[SM10] Salay, R., Mylopoulos, J.: The Model Role Level – A Vision. To appear in ER 2010 (2010).<br />
<br />
[Wei81] M. Weiser, Program slicing, in: Proceedings of the 5th International Conference on Software Engineering (ICSE'81), 1981, pp. 439–449.<br />
<br />
[Wij05] J. Wijsen: Database Repairing using Updates. ACM Transactions on Database Systems, 30(3):722--768, 2005.</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Shiva_Nejati
Shiva Nejati
2010-12-22T20:36:55Z
<p>128.100.4.130: </p>
<hr />
<div> <br />
<br />
* Home<br />
* People<br />
* Publications<br />
* Theses<br />
* Courses<br />
* Projects<br />
* Events<br />
* Info for New Students<br />
* Reading Group<br />
* Software<br />
* Funding<br />
* FM Links<br />
<br />
<br />
PhD :: Shiva Nejati<br />
<br />
Publications<br />
==Interests==<br />
model checking and automated reasoning<br />
<br />
==Contact Information==<br />
<br />
<br />
==PHD Thesis==<br />
<br />
[ 2003 -- Current ] N/A<br />
==Advisors==<br />
<br />
<br />
* Marsha Chechik<br />
<br />
<br />
==MSC Thesis==<br />
<br />
[ 2002 -- 2003 ] Refinement Relations on Partial Specifications<br />
==Advisors==<br />
<br />
<br />
* Marsha Chechik<br />
<br />
<br />
==Publications==<br />
<br />
<br />
2003<br />
<br />
===2003===<br />
<br />
<br />
* S. Nejati. ``Refinement Relations on Partial Specifications'', Master's Thesis, Department of Computer Science, University of Toronto, Toronto, Ontario, Canada, June 2003<br />
<br />
* S. Nejati and A. Gurfinkel. ``Stuttering Refinement on Partial Systems'', in Proceedings of the Eighteenth Annual IEEE Symp. on Logic in Computer Science (LICS'03), Short Paper, June 2003<br />
<br />
<br />
<br />
For questions and suggestions contact the webmaster<br />
Formal Methods Group, CS Department, University of Toronto 2004<br />
<br />
[[Category:PhD Student]]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Security_and_Privacy_for_Internet_Services
Security and Privacy for Internet Services
2010-12-22T20:36:53Z
<p>128.100.4.130: </p>
<hr />
<div>{{Project|<br />
over=<br />
Security and, to a lesser extent, privacy have been active research areas in computing for a long time. Techniques such as access controls and firewalls have been developed to protect data, programs, and more recently networks, from attacks or other infringements. However, most of these techniques were developed for earlier generations of computing environments that were usually under the control of a single, closed jurisdiction -- such as a single enterprise with a well-defined boundary. The open Internet environment, together with new business and organizational practices, has increased the complexity of security and privacy considerations dramatically. In such a setting, a system could potentially interact with and share information with many other systems, often based on ad hoc and dynamically negotiated configurations. Traditional models and techniques for characterizing and analyzing security and privacy are ill-equipped to deal with the much higher social complexity that is implicit in this new setting.<br />
<br />
This project aims to develop a methodological framework for achieving security and privacy for internet services. We recognize that security and privacy issues originate from human concerns and intents, and thus should be modeled through social concepts such as strategic social actors and social dependency networks. Social concepts are extended to cover relationships among software systems and components as well, as human intentions are embedded in and exercised through software.<br />
<br />
We will use this methodological framework to build and maintain a knowledge repository containing best practices in security and privacy related to internet services. Using this repository, we plan to build an interactive tool that will support system designers, administrators and assessment officers by bringing relevant knowledge to bear at decision points.|<br />
news=|<br />
collab = |<br />
pi=[[Eric Yu]] and [[John Mylopoulos]]|<br />
faculty = |<br />
funding = [http://www.bul.utoronto.ca/ Bell University Labs]|<br />
tag = spis|<br />
username= setoronto|<br />
title = SPIS |<br />
students = <br />
* [[Golnaz Elahi]]<br />
* [[Nicola Zannone]]<br />
==== Recent students ====<br />
* [[Frank Zhihua Hu]]<br />
* [http://www.cs.toronto.edu/%7Ezli/ Zheng Li]<br />
* [http://www.cs.toronto.edu/%7Eliu/ Lin Liu]|<br />
}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Samar_Sabie
Samar Sabie
2010-12-22T20:36:52Z
<p>128.100.4.130: </p>
<hr />
<div>I decided to move everything to a blog because editing in wiki became tiresome for me.<br />
Here's the [http://samar-sabie.blogspot.com/ new blog].</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/SE_Seminar
SE Seminar
2010-12-22T20:36:51Z
<p>128.100.4.130: </p>
<hr />
<div>==Preamble==<br />
Our SE Seminar takes place each week to present the work of SE group members and invited speakers. <br />
Attendance is open to '''everybody'''. Feel free to propose new seminars. Contact [mailto:gelahi@cs.utoronto.ca Golnaz] to arrange a talk. <br />
<br />
* When: (typically) 2pm - 4pm, Wednesdays <br />
* Where: SE Debugging Room (the lounge area in the SE Lab)<br />
<br />
== Upcoming Seminars ==<br />
<br />
'''Thursday, October 28th, 1pm, debugging room (3234)''' <br />
<br />
'''Interactive Goal Model Analysis Applied - Systematic Procedures versus Ad hoc Analysis, Jennifer Horkoff'''<br />
<br />
<br />
Abstract. Intentional modeling, capturing the goals of stakeholders, has been proposed as a means of early system elicitation and design for an enterprise, focusing on social and strategic requirements. It is often assumed that more utility can be gained from goal models by applying explicit analysis over models, but little work has been devoted to understanding how or why this occurs. In this work we test existing hypotheses concerning interactive goal model analysis via multiple case studies. Previous results have indicated that such analysis increases model iteration, prompts further elicitation, and improves domain knowledge. Results of the new studies do not provide strong evidence to support these claims, showing that such benefits, when they occur, can occur both with systematic and ad hoc model analysis. However, the results reveal other benefits of systematic analysis, such as a more consistent interpretation of the model, more complete analysis, and the importance of training. <br />
<br />
----<br />
'''Wed, November 10th, 2pm, debugging room (3234)'''<br />
<br />
''[http://www.jureta.net Ivan Jureta], FNDP/Université de Namur''<br />
<br />
'''Techne: Towards a New Generation of Requirements Modeling Languages with Goals, Preferences, and Inconsistency Handling'''<br />
<br />
Techne is an abstract requirements modeling language that lays formal<br />
foundations for new modeling languages applicable during early phases<br />
of the requirements engineering process. During these phases, the<br />
requirements problem for the system-to-be is being structured, its<br />
candidate solutions described and compared in terms of how desirable<br />
they are to stakeholders. We motivate the need for Techne, introduce<br />
it through examples, and sketch its formalization.<br />
<br />
----<br />
<br />
[[Previous talks]]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/SEWikiAccounts
SEWikiAccounts
2010-12-22T20:36:50Z
<p>128.100.4.130: </p>
<hr />
<div>Accounts on this wiki are restricted to members of the SE group. This determination is made automatically based on the sponsor of your CSLab unix account.<br />
<br />
To apply for a CSLab unix account, fill out the form at http://www.cs.toronto.edu/cgi-bin/acctform.cgi and then bug your supervisor to approve the request. Note your "login name".<br />
<br />
After your CSLab unix account is created, you will automatically get a mediawiki account on the server for <nowiki>http://se.cs.toronto.edu</nowiki>. To set a password for mediawiki, go to the login page, type your CSLab account name, and press "E-mail password". It will mail a new password to yourname@cs.toronto.edu. (Accounts are automatically created on this server several times a day, but your [[POC]] can accelerate the process upon request.)<br />
<br />
If you already have a CSLab unix account, and your sponsor is in SE, you already have a mediawiki account (but you may have to set your password as described above). If you think you should have a mediawiki account but don't, please contact your [[POC]].</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Robert_Will/Movie_Night
Robert Will/Movie Night
2010-12-22T20:36:48Z
<p>128.100.4.130: </p>
<hr />
<div>#REDIRECT [[User:Robertw/Movie Night]]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Robert_Will
Robert Will
2010-12-22T20:36:47Z
<p>128.100.4.130: </p>
<hr />
<div>Robert Jack 啸风 Will, MSc student in the Formal Methods group advised by [[Eric Hehner]] from 2007 to 2009. Subsequently moved to Germany. Last known on-line presence: [http://real-world-phenomena.org http://real-world-phenomena.org]<br />
<br />
== Master's thesis: Constructing Calculations from Consecutive Choices (C4P) ==<br />
<br />
My thesis describes a computer program which helps learning and professional programmers as well as mathematicians to construct and review calculations and calculational proofs. For didactic and other purposes, the responsibility for the design of programs and proofs is on the human user. The computer, on the other hand, does all the substitution and formal pattern matching work, thereby freeing the human from petty work and guaranteeing correctness of proofs and calculations. It will also offer substantial guidance for the human intuition that's driving the proofs. <br />
The thesis specifies such a proof tool by describing the logical language, the rules of proof, and how the user can interact with proofs. <br />
<br />
[http://www.cs.toronto.edu/~robertw/MSc_report.pdf Full text “Constructing Calculations from Consecutive Choices – a Tool to Make Proofs More Transparent”]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Requirements_monitoring
Requirements monitoring
2010-12-22T20:36:46Z
<p>128.100.4.130: </p>
<hr />
<div>{{Project|<br />
over=The Requirements Monitoring project, in conjunction with Computer Associates, investigates automated techniques for detecting system errors based on a comprehensive goal model of system intentions.|<br />
pi=[[John Mylopoulos|John Mylopoulos]]|<br />
faculty=|<br />
students=<br />
* [http://www.cs.toronto.edu/~yw Yiqiao Wang]|<br />
collab=|<br />
funding=<br />
* [http://ca.com/ca/en/ Computer Associates]<br />
}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Reading_Groups
Reading Groups
2010-12-22T20:36:45Z
<p>128.100.4.130: </p>
<hr />
<div>Reading groups are weekly or bi-weekly gatherings to discuss papers of common interest. These are typically organized around a central theme.<br />
<br />
* [[Eastern Promises]] - empirical software engineering, requirements<br />
* [[ReadingGroup|Formal Methods]]<br />
* Modeling<br />
* [[Types]]</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/ReadingGroup
ReadingGroup
2010-12-22T20:36:44Z
<p>128.100.4.130: </p>
<hr />
<div>Our Reading Group meets each week to discuss papers suggested by our members. Attendance is open to '''everybody'''. Feel free to add paper suggestions to our paper queue. Occasionally, members give practice talks. Contact Zak to arrange a talk. <br />
<br />
* When: Wednesdays at noon<br />
* Where: SE Debugging Room (BA3234)<br />
<br />
==Current session (2010)==<br />
<br />
===Timeline===<br />
* '''Oct 27''' Impressions of FLoC<br />
<br />
===Topic Lists===<br />
<br />
====Ordered Topics====<br />
*Overview of the different projects that have been/are being carried out by the group.<br />
*Read survey paper on static analysis techniques<br />
::Title: A Survey of Automated Techniques for Formal Software Verification<br />
::Authors: D'Silva, Kroening, Weissenbacher<br />
::Available at: [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4544862 IEEE Xplore]<br />
*Abstract Interpretation - foundations, applications. Probably will start by going through existing slides on the topic, then looking at papers.<br />
*Model Checking, CEGAR<br />
*Terminator<br />
*Category Theory: Steve's slides, youtube lectures<br />
*Model Fusion work<br />
*Type Theory, Partial Evaluation<br />
<br />
====Other topics (not yet ordered)====<br />
*Carolyn offered to give a talk at some point on her work on subway systems<br />
*Pi-calculus<br />
*Recovery<br />
*Concurrency<br />
*Hybrid Model Checking<br />
*Term Rewriting<br />
*Contracts and concurrency<br />
<br />
==Previous Sessions==<br />
<br />
===2009===<br />
* '''Oct 8, 15, 22''' David Barton, ''Notations for systems design''<br />
* '''Oct 1''' Shoham Ben-David, ''Applications of Description Logic and Causality in Model Checking''<br />
* '''Sep 17''' Prof. Rick Hehner, ''a Probability Perspective''<br />
* '''Sep 10''' Prof. Rick Hehner, ''from Boolean Algebra to Unified Algebra''<br />
* '''Sep 3''' Aws Albarghouthi, ''Abstract analysis of concrete execution''<br />
* '''Sep 2''' Anya Tafliovich, ''Predicative Quantum Programming''<br />
* '''Aug 27''' [http://berkeley.intel-research.net/dgay/pubs/09-icse-deadlock.pdf Effective Static Deadlock Detection (Mayur Naik, Chang-Seo Park, Koushik Sen, David Gay) Winner of ACM SIGSOFT Distinguished Papers Award]<br />
* '''Aug 20''' Jianwei Niu, ''Formal Models for Group-Centric Secure Information Sharing''<br />
* '''Aug 13''' Arie Gurfinkel, ''Decision Diagrams for Linear Arithmetic''<br />
* '''Aug 5''' Marsha, ''partial behavioral models''<br />
* '''July 30''' Shoham Ben-David, ''Explaining Counterexamples Using Causality''<br />
* '''July 23''' Aws, ''On the Use of Planning Technology for Verification''<br />
* '''July 10''' Borys Bradel's practice talk, ''Extending Goal Models with a Probability Model and using Bayesian Networks''<br />
* '''June 26, July 3''' Ou went over the slides about abstraction in software model checking that Marsha and Arie presented in the tutorial at FM 2006: [http://fm06.mcmaster.ca/FMTutorial06-handout.pdf Model-Checking: From Hardware to Software]<br />
* '''June 5, 19''' Talks by Albert: ''a Practical Theory of Programming'', ''Alternatives to Prof. Hehner's theory''<br />
* '''May 1,8, 29''' Byron Cook's TERMINATOR [http://www.dcs.qmul.ac.uk/~byron/berk1.pdf part 1] [http://www.dcs.qmul.ac.uk/~byron/berk2.pdf part 2] [http://www.dcs.qmul.ac.uk/~byron/berk3.pdf part 3] [http://byroncook.blogspot.com/ Byron Cook's blog]<br />
* '''Apr 17''' [http://research.microsoft.com/en-us/um/cambridge/projects/terminator/binreach.pdf Termination Proofs for Systems Code]<br />
* '''Apr 3''' [http://www.springerlink.com/content/v36q54n726773812/fulltext.pdf Model Checking Programs]<br />
* '''Mar 27''' We will continue our review of the Abstract Interpretation slides located at: [http://web.mit.edu/afs/athena.mit.edu/course/16/16.399/www/ link]<br />
* '''Mar 20''' Practice talk by Anya Tafliovich, ''Programming with Quantum Communication''<br />
* '''Mar 13''' [http://www.di.ens.fr/~cousot/publications.www/Cousot-LNCS2000-sv-sb.pdf Abstract Interpretation Based Formal Methods and Future Challenges] [http://www.di.ens.fr/~cousot/publications.www/Top4-Abst-Int-1-PC-RC.pdf Basic Concepts of Abstract Interpretation]<br />
* '''Mar 6''' [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4544862 A Survey of Automated Techniques for Formal Software Verification].<br />
* '''Feb 13''' Jordi Cabot, ''Verifying UML/OCL Operation Contracts''<br />
<br />
===2008===<br />
* '''Jun 10''' Albert Lai talked about operational semantics, soundness of refinements, and lazy evaluation. [http://www.cs.toronto.edu/~trebla/talk-jun-10-2008.pdf slides]<br />
* '''May 29''' Albert Lai talked about a theorem on loop invariants. [http://www.cs.toronto.edu/~trebla/loop-talk-may-25-2008.pdf slides]<br />
* '''Jan 15''' Jocelyn Simmonds talked about using SAT solvers in practice and SAT competitions (using N. Een's FMCAD '07 tutorial slides)<br />
* '''Jan 8''' Jocelyn Simmonds talked about SAT solvers (using N. Een's FMCAD '07 tutorial slides)<br />
<br />
===2007===<br />
* '''Dec 11''' Robert Will talked briefly about Paramodulation<br />
* '''Dec 4''' Albert Lai talked about Rewriting.<br />
* '''Nov 27''' Robert Will described window inference and gave a demo of the Ergo theorem prover.<br />
* '''Nov 20''' Anya Tafliovich described tableaux and model elimination.<br />
* '''Nov 13''' Ric Hehner described resolution.<br />
* '''Nov 6''' Bowen, J.P., Hinchey, M.G. "Ten commandments of formal methods... ten years later", IEEE Computer, Jan. 2006, pp. 40-48.<br />
* '''Nov 2''' Jocelyn Simmonds, "Exploiting Resolution Proofs to Speed Up LTL Vacuity Detection for BMC" (practice talk for FMCAD), Planning session for Reading Group (see [[ReadingGroup#Topic Lists]])<br />
* '''Oct 16''' Continued discussing the previous paper (Oct 2)<br />
* '''Oct 2''' O. Grumberg, A. Schuster, A. Yadgar. "3-Valued Circuit SAT for STE with Automatic Refinement", 5th International Symposium on Automated Technology for Verification and Analysis (ATVA'07), Tokyo, October 2007. [http://www.cs.technion.ac.il/users/orna/Yadgar-atva07.pdf Paper]<br />
* '''Sept 25''' A. Datta, A. Derek, J.C. Mitchell, V. Shmatikov, and M. Turuani, "Probabilistic polynomial-time semantics for a protocol security logic" (invited), 32nd International Colloquium on Automata, Languages and Programming (ICALP '05), Lisbon, July, 2005.<br />
* '''Sept 18''' Jayadev Misra, "Powerlist: A Structure for Parallel Recursion", TOPLAS, Vol. 16, No. 6, pp. 1737-1767, November 1994 '''(talk)'''<br />
* '''Sept 13''' Naghmeh Ghafari, "Algorithmic Analysis of Piecewise FIFO Systems" '''(talk)'''<br />
* '''May 9''' Kelvin Ku, "Software Model-Checking for Buffer Overflow Analysis: Benchmarking as an Aid to Tool Development" '''(talk)'''<br />
* '''Apr 25''' B. Gulavani, T. Henzinger, Y. Kannan, A. Nori and S. Rajamani. "SYNERGY: a new algorithm for property checking", SIGSOFT '06/FSE-14: Proceedings of the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering<br />
* '''Apr 4''' Mihaela Bobaru, "Finding Environment Guarantees", FASE '07 presentation '''(talk)'''. We also read the following paper: Marcelo F. Frias, Carlos G. Lopez Pombo, and Mariano M. Moscato, "Alloy Analyzer+PVS in the Analysis and Verification of Alloy Specifications", TACAS '07, Best paper award<br />
* '''Mar 28''' Orna Kupferman and Sarai Sheinvald-Faragy, "Finding Shortest Witnesses to the Nonemptiness of Automata on Infinite Words", CONCUR '06<br />
* '''Mar 21''' K. McMillan, "Lazy Abstraction with Interpolants", CAV'06<br />
* '''Mar 14''' Ranjit Jhala and Kenneth L. McMillan. "A practical and complete approach to predicate refinement". In TACAS, pages 459-473, 2006<br />
* '''Mar 7''' Ranjit Jhala, Rupak Majumdar, and Ru-Gang Xu, "State of the Union: Type Inference via Craig Interpolation", TACAS '07<br />
* '''Feb 28''' Niklaus Wirth. "Good Ideas, Through the Looking Glass", IEEE Computer, Jan. 2006, pp. 56-68<br />
* '''Feb 14''' (Presented by Kelvin Ku) Himanshu Jain, Franjo Ivancic, Aarti Gupta, Ilya Shlyakhter, Chao Wang. "Using Statically Computed Invariants inside the Predicate Abstraction and Refinement Loop", CAV '06<br />
* '''Feb 7''' Continued discussing the previous paper (Jan 31). Anya volunteered to look up SERE/PSL, Jocelyn tried to explain LTL testers, and Mihaela looked into tester composition.<br />
* '''Jan 31''' A. Zaks and A. Pnueli, "PSL Model Checking and Run-time Verification via Testers", FM 06.<br />
* '''Jan 24''' Continued discussing the previous paper (Jan 17)<br />
* '''Jan 17''' Yifeng Chen, "Semantic Inheritance in Unifying Theories of Programming" (referee: Ric)<br />
* '''Jan 10''' Greta Yorsh, Thomas Ball, and Mooly Sagiv, "Testing, Abstraction, Theorem Proving: Better Together!", International Symposium on Software Testing and Analysis, 2006<br />
<br />
===2006===<br />
<br />
* '''Dec 13''' Continued discussing the previous paper (Dec 6)<br />
* '''Dec 6''' Carroll Morgan, "The Shadow Knows: Refinement of Ignorance in Sequential Programs", MPC 2006<br />
* '''Nov 29''' Caires and Cardelli, "A Spatial Logic for Concurrency (Part I)"<br />
* '''Nov 22''' Lori Clarke and David Rosenblum, "A Historical Perspective on Runtime Assertion Checking in Software Development", ACM SIGSOFT Software Engineering Notes, May 2006<br />
* '''Nov 15''' Dershowitz, Hanna, Nadel, "A Scalable Algorithm for Minimal Unsatisfiable Core Extraction", SAT'06<br />
* '''Nov 8''' Rajeev Alur and P. Madhusudan, "Adding Nesting Structure to Words", Tenth International Conference on Developments in Language Theory (DLT06) (Invited paper)<br />
* '''Nov 1''' Shiva Nejati, "Thorough Checking Revisited" '''(practice talk for FMCAD)''', Yuan Gan, "Runtime Monitoring for Web Services" '''(talk)'''<br />
* '''Oct 25''' Beyer, Henzinger and Theoduloz, "Lazy Shape Analysis", CAV 06<br />
* '''Oct 11''' Wonhong Nam and Rajeev Alur, "Learning-based Symbolic Assume-guarantee Reasoning with Automatic Decomposition", Fourth International Symposium on Automated Technology for Verification and Analysis (ATVA06)<br />
* '''Oct 4''' Larsen, Nyman, Wasowski, "Interface Input/Output Automata", FM06<br />
* '''Sept 27''' E.C.R. Hehner, "Unified Algebra" '''(talk)'''<br />
* '''Sept 14''' Mihaela Bobaru talked about the work she did during her summer internship at NASA<br />
* '''Jul 20''' Kelvin Ku, "Planning under Uncertainty: A Model Based Approach" '''(talk)'''<br />
* '''Jul 13''' Steve Easterbrook, "Introduction to Category Theory" '''(talk)'''. Rick Salay, "Theory of Institutions and Its Applications in Software Engineering" '''(talk)'''<br />
* '''Jul 6''' T. Le Gall, B. Jeannet, and T. Jéron, "Verification of Communication Protocols using Abstract Interpretation of FIFO Queues", from AMAST 06<br />
* '''Jun 29''' Ji Zhang and Betty Cheng, "Model-Based Development of Dynamically Adaptive Software", (Best paper of ICSE 2006)<br />
* '''Jun 22''' Junfeng Yang, Can Sar, Paul Twohey, Cristian Cadar, and Dawson Engler, "Automatically generating malicious disks using symbolic execution" from IEEE Security and Privacy 2006<br />
* '''Jun 15''' Anya Tafliovich, "Predicative Quantum Programming", MPC 2006<br />
* '''Jun 8''' Henk Barendregt, "Introduction to Generalized Type Systems", Journal of Functional Programming, 1(2):124-154, 1991<br />
* '''Jun 1''' David Harel and Bernhard Rumpe, "Meaningful Modeling: What's the Semantics of "Semantics"?", in IEEE Computer, Oct. 2004<br />
* '''May 25''' Corina Pasareanu and Willem Visser, "Verification of Java Programs Using Symbolic Execution and Invariant Generation", SPIN 2004<br />
* '''May 9''' Continued discussing the previous paper (May 2)<br />
* '''May 2''' Ioannis T. Kassios, "Dynamic Frames: Support for Framing, Dependencies and Sharing without Restrictions". In J. Misra, T. Nipkow, E. Sekerinski (eds.) Formal Methods 2006, vol. 4085 of Lecture Notes in Computer Science, pages 268-283. Springer-Verlag, 2006. Best paper award.<br />
* '''Apr 25''' Continued discussing the previous paper (Apr 18) <br />
* '''Apr 18''' Peter W. O'Hearn, John C. Reynolds, and Hongseok Yang, "Local Reasoning about Programs that Alter Data Structures" in CSL 2001<br />
* '''Apr 11''' Dino Distefano, Peter W. O'Hearn, Hongseok Yang, "A Local Shape Analysis Based on Separation Logic", TACAS 2006<br />
* '''Apr 4''' Marsha Chechik and Arie Gurfinkel discussed their experiences at ETAPS '06. We also read the following paper: Bhargav S. Gulavani, Sriram K. Rajamani, "Counterexample Driven Refinement for Abstract Interpretation", TACAS 2006<br />
* '''Mar 21''' Arie Gurfinkel, "Why Waste a Perfectly Good Abstraction?" '''(practice talk for TACAS '06)'''<br />
* '''Mar 14''' R. Alur, P. Madhusudan, and Wonhong Nam, "Symbolic Compositional Verification by Learning Assumptions", CAV 2005<br />
* '''Mar 7''' Tamarah Arons, Elad Elster et al, "Formal Verification of Backward Compatibility of Microcode" CAV 2005<br />
* '''Feb 28''' Paper: "Angelic Nondeterminacy as External Choice in CSP" (need more info)<br />
* '''Feb 21''' Arie Gurfinkel, "Supporting Construction, Analysis, and Understanding of Software Models" '''(talk)'''<br />
* '''Feb 14''' Edmund Clarke, Muralidhar Talupur, Helmut Veith, "Environment Abstraction for Parameterized Verification", VMCAI 2006<br />
* '''Jan 31''' E.C.R. Hehner, "Retrospective and Prospective for Unifying Theories of Programming" '''(talk)'''<br />
* '''Jan 24''' Xavier Leroy, "Formal Certification of a Compiler Back-end, or Programming a Compiler with a Proof Assistant", POPL 06<br />
* '''Jan 17''' Corina S. Păsăreanu, Radek Pelánek and Willem Visser, "Concrete Model Checking with Abstract Matching and Refinement", CAV 2005<br />
* '''Jan 12''' Naghmeh Ghafari and Richard Trefler, "Piecewise FIFO Channels Are Analyzable", VMCAI 2006<br />
* '''Jan 5''' Arie Gurfinkel, "Systematic Construction of Abstractions for Model-Checking" '''(practice talk for VMCAI 2006)'''<br />
<br />
===2005===<br />
<br />
* '''Dec 15''' Jonathan Amir, "Verification of Web Services" '''(talk)'''<br />
* '''Dec 7''' Aysu Betin-Can and Tevfik Bultan, "Interface-based specification and verification of concurrency controllers", SoftMC 2003<br />
* '''Dec 1''' T. Henzinger, R. Jhala, R. Majumdar, "Permissive Interfaces", FSE 2005<br />
* '''Nov 23''' Continued reading a previous paper (Nov 18)<br />
* '''Nov 18''' B. Cook, A. Podelski, and A. Rybalchenko, "Abstraction Refinement for Termination", SAS 05. Shiva Nejati, Justin Ward and Jocelyn Simmonds discussed their experiences at ASE '05.<br />
* '''Nov 10''' O. Grumberg, O, Strichman, F. Lerda, and M. Theobald, "Proof-Guided Underapproximation-Widening for Multi-Process Systems", POPL 2005<br />
* '''Nov 3''' Shiva Nejati and Justin Ward gave practice talks for ASE '05 '''(talk)'''<br />
* '''Oct 26''' Leslie Lamport, "Real-time model checking is really simple", CHARME 2005<br />
* '''Oct 20''' Jocelyn Simmonds, practice talk for ASE '05 '''(talk)'''<br />
* '''Oct 12''' W.J. Fokkink, J.F. Groote and M.A. Reniers, "Process algebra needs proof methodology", from The Concurrency Column, Bulletin of the EATCS, February 2004<br />
* '''Oct 6''' Cindy Eisner, Dana Fisman, and John Havlicek, "A Topological Characterization of Weakness", PODC 2005<br />
* '''Sept 29''' Patrick Maier, "Compositional Circular Assume-Guarantee Rules Cannot Be Sound and Complete", FoSSaCS 2003<br />
<br />
==Paper Queue==<br />
<br />
Papers are not listed in any particular order. Feel free to add/remove papers/categories.<br />
<br />
===General SE===<br />
* E. M. Clarke Jr. "Programming language constructs for which it is impossible to obtain good Hoare axiom systems", in Journal of the ACM v26 #1 p129-147, January 1979 (earlier POPL'77) [http://www.cs.utoronto.ca/~trebla/tmp/p129-clarke.pdf PDF]<br />
* B. Moller, "The Linear Algebra of UTP"<br />
<br />
===Abstraction===<br />
* Thomas Ball, Byron Cook, Shuvendu K. Lahiri, and Lintao Zhang. "Zapato: Automatic theorem proving for predicate abstraction refinement", in CAV, pages 457-461, 2004<br />
* Arie Gurfinkel and Marsha Chechik. "Why waste a perfectly good abstraction?". In TACAS, pages 212-226, 2006<br />
* Shuvendu K. Lahiri, Randal E. Bryant, and Byron Cook. "A symbolic approach to predicate abstraction". In CAV, pages 141-153, 2003.<br />
* Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, and Kenneth L. McMillan. "Abstractions from proofs". In POPL, pages 232-244, 2004.<br />
* Ranjit Jhala and Kenneth L. McMillan. "A practical and complete approach to predicate refinement". In TACAS, pages 459-473, 2006.<br />
* Daniel Kroening and Georg Weissenbacher. "Counterexamples with loops for predicate abstraction". In CAV, pages 152-165, 2006.<br />
<br />
===Alias Analysis===<br />
* Dzintars Avots, Michael Dalton, V. Benjamin Livshits, and Monica S. Lam. "Improving software security with a C pointer analysis". In ICSE, pages 332-341, 2005.<br />
* Manuvir Das, Ben Liblit, Manuel Fähndrich, and Jakob Rehof. "Estimating the impact of scalable pointer analysis on optimization". In SAS, pages 260-278, 2001.<br />
<br />
===Static Analysis===<br />
* Yichen Xie, Andy Chou, and Dawson R. Engler. "Archer: using symbolic, path-sensitive analysis to detect memory access errors". In ESEC / SIGSOFT FSE, pages 327-336, 2003.<br />
* Bruno Blanchet, Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, and Xavier Rival. "A static analyzer for large safety-critical software". In PLDI, pages 196-207, 2003.<br />
* Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, and Xavier Rival. "The Astrée analyzer". In ESOP, pages 21-30, 2005.<br />
* Patrick Cousot and Nicolas Halbwachs. "Automatic discovery of linear restraints among variables of a program". In POPL, pages 84-96, 1978.<br />
* Nurit Dor, Michael Rodeh, and Shmuel Sagiv. "CSSV: towards a realistic tool for statically detecting all buffer overflows in C". In PLDI, pages 155-167, 2003.<br />
* Vinod Ganapathy, Somesh Jha, David Chandler, David Melski, and David Vitek. "Buffer overrun detection using linear programming and static analysis". In ACM Conference on Computer and Communications Security, pages 345-354, 2003.<br />
* Brian Hackett, Manuvir Das, Daniel Wang, and Zhe Yang. "Modular checking for buffer overflows in the large". In ICSE '06: Proceeding of the 28th international conference on Software engineering, pages 232-241, New York, NY, USA, 2006. ACM Press.<br />
* Antoine Miné. "The octagon abstract domain". In WCRE, pages 310-, 2001.<br />
<br />
===Tool Comparisons===<br />
* John Wilander and Mariam Kamkar. "A comparison of publicly available tools for static intrusion prevention". In Proceedings of the 7th Nordic Workshop on Secure IT Systems, pages 68-84, Karlstad, Sweden, November 2002.<br />
<br />
===Lazy Abstraction and Refinement (BLAST)===<br />
* Ranjit Jhala. "Program Verification by Lazy Abstraction". PhD thesis, University of California at Berkeley, 2004.<br />
<br />
===Program Derivation===<br />
* Andy Gill and Graham Hutton. "The worker/wrapper transformation". In preparation for submission to the Journal of Functional Programming, December 2007. [http://www.cs.nott.ac.uk/~gmh/bib.html#wrapper Abstract and PDF]<br />
* José Bacelar Almeida and Jorge Sousa Pinto. "Deriving Sorting Algorithms". Tech Report at Universidade do Minho, Portugal, April 2006. 27 pages. [http://arxiv.org/abs/0802.3881 arXiv record]<br />
<br />
===Miscellaneous===<br />
* Sumit Nain and Moshe Vardi. "Branching vs. Linear Time: Semantical Perspective". ATVA'07 invited paper. [http://www.cs.rice.edu/~vardi/papers/atva0711.pdf paper PDF] and [http://www.cs.rice.edu/~vardi/papers/atva07tk.pdf slides PDF]<br />
* Hongwei Xi et al. The [http://www.ats-lang.org/ ATS] dependently-typed programming and proving language. (To be refined to a paper citation or a demo.)<br />
* Leslie Lamport. "The PlusCal Algorithm Language". 2009. [http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html#pluscal Abstract and PDF]<br />
* Himanshu Jain, Franjo Ivancic, Aarti Gupta, Ilya Shlyakhter, and Chao Wang. "Using statically computed invariants inside the predicate abstraction and refinement loop". In CAV, pages 137-151, 2006.<br />
* Himanshu Jain, Franjo Ivancic, Aarti Gupta, and Malay K. Ganai. "Localization and register sharing for predicate abstraction". In TACAS, pages 397-412, 2005.<br />
* Franjo Ivancic, Ilya Shlyakhter, Aarti Gupta, and Malay K. Ganai. "Model checking c programs using f-soft". In ICCD, pages 297-308, 2005.<br />
* Daniel Kroening, Alex Groce, and Edmund M. Clarke. "Counterexample guided abstraction refinement via program execution". In Formal Methods and Software Engineering: 6th International Conference on Formal Engineering Methods (ICFEM 2004), Seattle, WA, USA, November 8-12, 2004, volume 3308 of Lecture Notes in Computer Science, pages 224-238. Springer, 2004.<br />
* Sagar Chaki and Scott Hissam. "Certifying the absence of buffer overflows". Technical Report CMU/SEI-2006-TN-030, Software Engineering Institute, Carnegie Mellon University, September 2006.<br />
* Shaunak Chatterjee, Shuvendu K. Lahiri, Shaz Qadeer, and Zvonimir Rakamaric, "A Reachability Predicate for Analyzing Low-Level Software", in TACAS '07<br />
* Muhammad Zubair Malik, Aman Pervaiz, and Sarfraz Khurshid, "Generating Representation Invariants of Structurally Complex Data", in TACAS '07</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/RECaseStudies
RECaseStudies
2010-12-22T20:36:42Z
<p>128.100.4.130: </p>
<hr />
<div>==Model problems for Requirements Engineering==<br />
Mature disciplines typically have a set of 'model problems', problems which researchers around the world can use to evaluate their designs or new techniques. I've never seen a good central location for these problems for requirements engineering, so I thought I would go through most of the papers I've read and find the common case studies. Shout-out to [http://www.cs.cmu.edu/~ModProb/index.html CMU], who host a set of model problems for software architecture, three of which appear here as well.<br />
<br />
What does it take to make this list? The first constraint is that the problem is described in a domain-centred way, and without reference to a particular solution or technology. The description should be external to the paper it appears in (so any omissions are clear). There should be more than one paper referencing this problem (otherwise it isn't a 'model'). Finally, preferably there is a final implementation (or post-mortem) for comparison purposes.<br />
<br />
Updates and additions [mailto:neil@neilernst.net are welcome].<br />
<table border="1"><br />
<tr><br />
<th>Title</th><br />
<th>Description</th><br />
<th>Source Files</th><br />
<th>Cited in</th><br />
<th>Working implementation</th><br />
</tr><br />
<tr><br />
<td>BART AART</td><br />
<td>A safety-critical train control system</td><br />
<td>[http://www.bibsonomy.org/bibtex/29bd7f877f30654389e9c908d9aae29ed/neilernst citation]</td><br />
<td>[http://www.bibsonomy.org/bibtex/2555b559dcd219f980d3dd3da9cd710ff/neilernst van Lamsweerde 2001]</td><br />
<td>not clear</td><br />
</tr><br />
<tr><br />
<td>LAS</td><br />
<td>The London ambulance system</td><br />
<td>[http://www.comp.lancs.ac.uk/computing/resources/IanS/SE7/CaseStudies/LondonAmbulance/index.html Description at U. Lancaster]<br />
[http://www.cs.ucl.ac.uk/staff/a.finkelstein/las.html Anthony Finkelstein's gathering of documents]</td><br />
<td>[http://www.info.ucl.ac.be/Research/Publication/2004/avl-FSE04.pdf Lamsweerde04] [http://www.cs.ucl.ac.uk/staff/a.finkelstein/papers/lascase.pdf Finkelstein and Dowling 94]</td><br />
</tr><br />
<tr><br />
<td>Meeting Scheduler</td><br />
<td>Schedule meetings, possibly distributed</td><br />
<td>[http://www.cs.cmu.edu/~ModProb/CS.html CMU description]</td><br />
<td>Yu 1995</td><br />
<td></td><br />
</tr><br />
<tr><br />
<td>Library information system</td><br />
<td>A system for managing library tasks like checking out books.</td><br />
<td>[http://www.cs.cmu.edu/~ModProb/LIB.html CMU description]</td><br />
<td>[http://www.bibsonomy.org/bibtex/293fcaed4750a1a688dcccd8af3e9cd7f/neilernst Pejtersen 92], [http://www.bibsonomy.org/bibtex/2b29a6a271fc1d4f991acb0092c770350/neilernst Dardenne et al. 93]</td><br />
</tr><br />
<tr><br />
<td>ATM</td><br />
<td>Automated teller machine</td><br />
<td>[http://www.cs.cmu.edu/~ModProb/ATM.html CMU description]</td><br />
<td>[http://www.cs.toronto.edu/~yw/publications/ASE/wang07ase.pdf (pdf) Wang et al, ASE 2007]</td><br />
</tr><br />
</table><br />
===Non-referenced case studies===<br />
Here are some other possible case studies, as yet unreported (at least, I couldn't find a reference).<br />
<table border="1"><br />
<tr><br />
<th>Title</th><br />
<th>Description</th><br />
<th>Source Files</th><br />
<th>Working implementation</th><br />
</tr><br />
<tr><br />
<td>Ontario Portal</td><br />
<td>An e-government portal strategy</td><br />
<td>[http://www.gov.on.ca/mgs/en/IAndIT/158445.html Ministry documentation]</td><br />
<td>none</td><br />
</tr><br />
<tr><br />
<td>Gotham City Fire Dispatch</td><br />
<td>A modeling challenge at RE 09 based on the LAS case study.</td><br />
<td>[http://www.gotel.net/ntm/ Olly Gotel's summary materials]</td><br />
</tr><br />
</table><br />
<br />
=== Other (possibly apocryphal) case studies ===<br />
* Air traffic control<br />
* Package routing<br />
* Elevator<br />
* Cruise control</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/PythonGUI
PythonGUI
2010-12-22T20:36:41Z
<p>128.100.4.130: </p>
<hr />
<div>* [http://boa-constructor.sourceforge.net/ Boa Constructor]<br />
* [http://pythoncard.sourceforge.net/ PythonCard]<br />
* [http://pydev.sourceforge.net/ PyDEV] for Eclipse</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_in_unspecified_year
Publications in unspecified year
2010-12-22T20:36:40Z
<p>128.100.4.130: </p>
<hr />
<div>{{Year Unspecified}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_in_2006
Publications in 2006
2010-12-22T20:36:39Z
<p>128.100.4.130: </p>
<hr />
<div>{{2006}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_in_2005
Publications in 2005
2010-12-22T20:36:37Z
<p>128.100.4.130: </p>
<hr />
<div>{{2005}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_in_2004
Publications in 2004
2010-12-22T20:36:36Z
<p>128.100.4.130: </p>
<hr />
<div>{{2004}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_in_2003
Publications in 2003
2010-12-22T20:36:35Z
<p>128.100.4.130: </p>
<hr />
<div>{{2003}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_in_2002
Publications in 2002
2010-12-22T20:36:34Z
<p>128.100.4.130: </p>
<hr />
<div>{{2002}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_in_2001
Publications in 2001
2010-12-22T20:36:32Z
<p>128.100.4.130: </p>
<hr />
<div>{{2001}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_by_Y._Jia
Publications by Y. Jia
2010-12-22T20:36:31Z
<p>128.100.4.130: </p>
<hr />
<div>{{Publications by Y. Jia}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_by_W._Liu
Publications by W. Liu
2010-12-22T20:36:30Z
<p>128.100.4.130: </p>
<hr />
<div>{{Publications by W. Liu}}</div>
128.100.4.130
http://se.cs.toronto.edu/index.php/Publications_by_W._Ding
Publications by W. Ding
2010-12-22T20:36:29Z
<p>128.100.4.130: </p>
<hr />
<div>{{Publications by W. Ding}}</div>
128.100.4.130