Keynote speakers

Professor John Etchemendy, Provost of Stanford University

Heterogeneous Reasoning

During the 1990s, we developed the theory of formal heterogeneous deduction: logically valid inference that involves information expressed using multiple different representations. The Hyperproof program was developed as an implementation of that theory, and permitted deductions using sentences of first-order logic and blocks-world diagrams. We have since been generalizing both the theory and the implementation to allow applications in a wide variety of domains. On the theoretical side, we have identified features of heterogeneous reasoning that occur in everyday domains but that do not correspond to logical deductions. Examples include applications in design, where a desired design may not be a logical consequence of some initial information, but rather is preferred based on notions of aesthetics, cost, or other non-logical features of the design domain. The architecture that we developed for Hyperproof generalizes to these situations and permits the modeling of this kind of reasoning. The notion of a logically valid inference is generalized to that of a justified modification, where the justification may be a piece of text written by the user, a calculation of the cost of materials in a design, or any other rationale that the user finds appropriate to the domain at hand. The general principles of information flow through a structured argument remain the same in both contexts.
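To make the generalization concrete, here is a minimal sketch, in Java, of how one step in such a structured argument might be represented. All names here (Justification, Step, CostJustification, and so on) are invented for illustration and are not the Hyperproof or Openproof API; the point is only that the justification attached to a step need not be a rule of logic.

```java
// Illustrative sketch only: hypothetical names, not the Hyperproof/Openproof API.
// A step records the information asserted plus why the author takes the
// modification to be justified -- and the justification need not be logical.
import java.util.List;

interface Justification {
    String rationale();  // a rule name, free text, a computed cost, etc.
}

// A justification that is simply text written by the user.
record TextJustification(String rationale) implements Justification {}

// A justification computed from the design domain, e.g. a materials cost.
record CostJustification(double materialCost) implements Justification {
    public String rationale() { return "material cost = " + materialCost; }
}

// One step in a structured argument.
record Step(String assertedInformation, Justification because) {}

public class JustifiedArgument {
    public static void main(String[] args) {
        List<Step> argument = List.of(
            new Step("Cube(a) and LeftOf(a, b)", new TextJustification("given premise")),
            new Step("choose aluminium frame", new CostJustification(112.50)));
        for (Step s : argument) {
            System.out.println(s.assertedInformation() + "  [" + s.because().rationale() + "]");
        }
    }
}
```

Whatever form the justification takes, the step-by-step structure, and hence the flow of information through the argument, is the same as in a logical proof.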

On the implementation side, we have designed the Openproof architecture, an application framework for building heterogeneous reasoning systems that permits "plug-ins". Individual developers can provide components to the architecture that are guaranteed to work with other existing components to create heterogeneous reasoning environments in a flexible manner. A researcher interested in Venn diagrams, for example, might implement components for that representation and use them together with our first-order logic components to construct a proof environment for those representations. This frees researchers and developers from the task of developing proof management facilities and permits a focus on the properties of the representations themselves. In this talk we will describe our generalized theory of heterogeneous reasoning, motivated by examples from design and by problems taken from GRE and SAT tests. We will demonstrate a variety of applications based on the Openproof framework to show the range of heterogeneous reasoning applications that may be built using the system.
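The plug-in contract might be pictured as follows. This is a hypothetical sketch under assumed names (RepresentationPlugin, follows, and the toy ReiterationPlugin are all invented), not the actual Openproof interfaces: the idea is that the proof manager stays representation-neutral and delegates the checking of each step to the component responsible for that representation.

```java
// Hypothetical sketch, not the real Openproof API: invented names to show
// how a representation component could plug into a shared proof manager.
import java.util.List;

// Each plug-in supplies one representation (FOL sentences, Venn diagrams, ...)
// and knows how to judge inference steps stated in that representation.
interface RepresentationPlugin<S> {
    String name();
    boolean follows(List<S> premises, S conclusion);
}

// A toy plug-in: "sentences" are plain strings, and a conclusion follows
// only if it literally appears among the premises (reiteration).
class ReiterationPlugin implements RepresentationPlugin<String> {
    public String name() { return "reiteration-only sentences"; }
    public boolean follows(List<String> premises, String conclusion) {
        return premises.contains(conclusion);
    }
}

public class OpenproofSketch {
    public static void main(String[] args) {
        RepresentationPlugin<String> p = new ReiterationPlugin();
        System.out.println(p.name() + ": " +
            p.follows(List.of("Cube(a)", "Small(b)"), "Cube(a)"));  // prints true
    }
}
```

On this picture, a researcher adding Venn diagrams would supply only the display and inference-checking logic for diagrams; the surrounding proof management comes with the framework.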

W. Bradford Paley, Columbia University and Digital Image Design Incorporated

Rich Data Representation: Sophisticated Visual Techniques for Ease and Clarity

Within the fields of Interaction Design and especially Information Visualization, diagram-like constructions are used fairly often. I will leave it up to this community to define the exact characteristics that make a diagram, but I expect that definition to include visual representations of abstract data entities and relationships; I also expect that clarity and ease of interpretation are among the goals for which many diagrams are created.

I have devoted my efforts to making clear, comfortable representations of abstract data, with a concentration on supporting very constrained domains of practice. As a result, I believe I have found (and developed) techniques for both structuring and rendering certain kinds of abstract data, and that such techniques are sadly underutilized.

The rendering techniques largely focus on putting visual richness into the representations. In most cases I now advocate adding visual richness to complex representations beyond just color: texture, varying line widths, curving lines, even pseudo-3D edges, drop shadows, and node shapes that suggest their meaning. This practice is saved from being decorative, from being what might be called "chart junk," by direct ties in two directions: to the data and to the mind. The extra dimensions of visual variation some techniques offer can better show the variability within the data. And if the techniques take into account how the human eye and mind extract knowledge from natural scenes, we can considerably ease the transformation from photons into ideas.

The structuring techniques likewise tie both into data structure and into the mental mechanisms studied by Vision and Cognitive Science, and they do so by paying attention to a mapping technique as questionable in the domain of layout as chart junk is in the domain of rendering: the use of metaphors. But well-developed domains of practice often have mappings from their abstract concepts onto more physical grounds. Such mappings help people understand the basics more easily, even automatize that understanding, so that work can proceed on higher levels, with bigger chunks of information.

I believe the areas of rich visual representation and metaphorically-driven layout are under-researched for two reasons: difficulty and fear. It is easy, given modern tools, to change the color of a node or line; it is much harder to change its texture, or to develop a carefully consistent and expressive variation in shape (people with the right skill set may not only be on another floor of the building, but in another school altogether). I believe fear drives some people from expressive graphics to abstract graphics: fear that embellishment of the graphics is unnecessary, or even suggests embellishment of the results. Abstraction seems to be mistaken for objectivity: Bacon forbid that the reports on findings show any kind of subjectivity; perhaps the experiments themselves were tainted by the same sloppy flaw.

But with more data to hand, more easily processed than ever before, and with increasingly complex models we can use to operate upon and understand that data, we will need stronger tools to help people conceptualize the models and understand the results. We need to understand when it is misleading or limiting (in a Whorfian way) to use a metaphor, and when one simply makes it easy for practitioners to absorb the structure. We need to understand when visual richness is there just to excite the eye, when it helps the visual parsing of the image and the layering of the information, and when it acts as a visual mnemonic, easing the mental connection of glyph and idea. I propose we carefully distinguish objectivity in the scientific method from paucity in making the results clear. And I propose that we extend Tufte's parsimonious data/ink ratio to a more process-grounded "data/think" ratio (pardon the rhyme).

I will show many examples of work that tries to be rich in order to be clear: not distorting data into a distracting illustration, but folding more data into the image by respecting the visual processes we engage to decode it. This work is my own, that of my students at Columbia University, and some that is over a century old: charts, graphs, and maps (perhaps diagrams) that take advantage of a more ecologically valid mode of presentation than many stripped-bare data presentations do. And along the way I will try to give some structure to how we might rigorously and repeatably design such work, verify its readability and correctness, and figure out exactly where and what kinds of techniques can support easier thinking in the quickly rising sea of things we want to understand.

Professor Dr Wilhelm Schäfer, University of Paderborn

Professor Dr Schäfer will be delivering a joint keynote with VL/HCC 2008.

Model Driven Development with Mechatronic UML

Visual languages form a constituent part of a well-established software development paradigm, namely model-driven development. The structure and functionality of the software are precisely specified by a model, which can be formally analyzed with respect to important (safety and liveness) properties of the system under construction. Executable code is then automatically generated from the model.

Although model-driven development has been recognized as having the potential to significantly improve the productivity and quality of software, success stories are restricted to particular domains, mainly business applications. Other domains, especially embedded systems, employ model-driven development only in very special cases and on a limited scale, namely in the continuous world of control theory. This is due to the complex nature of the software of advanced mechatronic (or embedded) systems, which includes complex coordination between system components under hard real-time constraints and reconfiguration of the control algorithms at runtime to adjust the behavior to changing system goals. In addition, code generation must obey the very limited and very specific resources of the target platform, i.e. the underlying hardware or operating system. Finally, techniques for modeling and verifying this kind of system have to address the interplay between the discrete world of complex computation and communication and the "traditional" world of continuous controllers. The safety-critical nature of these systems demands support for the rigorous verification of crucial safety properties.

Our approach, called Mechatronic UML, addresses the domain sketched above by proposing a coherent and integrated model-driven development approach. Modeling is based on a syntactically and semantically rigorously defined and partially refined subset of UML: a slightly refined version of component diagrams, coordination patterns, and a refined version of statecharts that includes a notion of time. Code generation obeys the resource limits of the target platform. Verification of safety properties is based on a special kind of compositional model checking to make it scalable. The approach is illustrated using an existing cooperative project with the engineering department, the RailCab project (www.railcab.de), which is developing a new type of demand-driven public transport system based on existing and new rail technology.
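As a rough illustration of the kind of behavior these models capture, consider a statechart transition guarded by a clock, in the spirit of timed automata. The following sketch uses invented names (the convoy states, TIMEOUT_MS, onAck) and is not Mechatronic UML's actual notation or its generated code; it only shows the underlying idea that the transition taken depends on the event and on the elapsed time.

```java
// Minimal sketch only: a statechart transition whose firing depends on a
// clock valuation, loosely in the spirit of real-time statecharts.
// All names (states, TIMEOUT_MS, onAck) are invented for illustration.
public class TimedStatechartSketch {
    enum State { CONVOY_REQUESTED, CONVOY_ACTIVE, FAILSAFE }

    static final long TIMEOUT_MS = 500;  // assumed hard deadline for the acknowledgement

    State current = State.CONVOY_REQUESTED;
    long clockStartedAt = System.currentTimeMillis();  // clock reset on entering the state

    // The transition taken depends on the event *and* the elapsed time:
    // a late acknowledgement must lead to a safe fallback state.
    void onAck() {
        long elapsed = System.currentTimeMillis() - clockStartedAt;
        current = (elapsed <= TIMEOUT_MS) ? State.CONVOY_ACTIVE : State.FAILSAFE;
    }

    public static void main(String[] args) throws InterruptedException {
        TimedStatechartSketch shuttle = new TimedStatechartSketch();
        Thread.sleep(100);  // acknowledgement arrives within the deadline
        shuttle.onAck();
        System.out.println(shuttle.current);  // prints CONVOY_ACTIVE
    }
}
```

In the approach itself, such deadlines are part of the verified model rather than ad hoc code: safety properties (for instance, that a missed deadline always leads to a safe state) are checked per coordination pattern, which is what makes the compositional model checking scale.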