Implementation Support: Lecture 9

Document Details


Uploaded by StylishSpessartine

University of Science and Technology

Dania Mohamed Ahmed

Tags

interactive application, programming support, user interface, computer science

Summary

This document provides a lecture on implementation support for interactive applications. It discusses different architectures for windowing systems and programming paradigms for interactive applications, such as the read-evaluate loop and notification-based programming.

Full Transcript

Implementation Support, Lec (9)
Dania Mohamed Ahmed

Introduction

The detailed specification outlines what an interactive application must accomplish, and the programmer then translates this into machine code that operates on the hardware. This involves creating software that can handle input events and display graphics, but such a low-level approach is tedious and error-prone, suited to those who enjoy complex technical challenges rather than designing user-friendly systems.

Programming support tools aim to simplify this process by providing higher-level abstractions. These tools enable programmers to code in terms of the application's interaction objects rather than the underlying hardware details. By building on top of essential hardware and software services, these abstractions allow programmers to focus on the desired interaction techniques, integrating input and output in a more intuitive way, even though the hardware fundamentally separates them.

Architectures of windowing systems

Bass and Coutaz identify three potential architectures for implementing the roles of a windowing system, all of which separate device drivers from application programs:

1. Application-level management: each application manages multiple processes independently. This approach is less ideal because it complicates synchronization with shared hardware and decreases application portability.

2. Kernel-level management: the operating system kernel handles process management, centralizing control and reducing the burden on individual applications. However, applications still need to be tailored to the specifics of the operating system.

3. Separate management application: the management function is implemented as a standalone application that interacts with other programs through a consistent, generic interface. This option offers the greatest portability across different operating systems.

This final option is referred to as the client–server architecture and is depicted in the figure below.

Figure: The client–server architecture

Programming the Application

When programming an interactive application, which acts as a client in a client–server architecture, the application's behavior is driven by user input. Two primary programming paradigms can organize the application's flow of control, independent of the windowing system.

The first paradigm is the read–evaluation loop, used internally within the application itself. For instance, on the Macintosh, this approach involves the server sending structured user inputs to the client application. The application reads these events and determines the appropriate responses based on its specific requirements. The server's role is limited to directing each event to the correct client, while the client application handles the event processing and response logic.

In the read–evaluation loop paradigm, the application has full control over event processing. However, this means the programmer must handle every possible event, which can be a cumbersome task. Tools like MacApp on the Macintosh can help by automating some of this tedious work, making event management more manageable.

Figure: The read–evaluation loop paradigm
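The following is a minimal sketch of a read–evaluation loop in Python. The FakeServer class and the event names are illustrative assumptions standing in for a real window server; the point is that the client itself reads each event and dispatches on its type.

    from collections import deque

    class FakeServer:
        """Stand-in for the window server: hands events to the client one at a time."""
        def __init__(self, events):
            self.queue = deque(events)

        def wait_event(self):
            # A real server would block here until the user does something.
            return self.queue.popleft()

    def read_evaluation_loop(server):
        # The client owns the flow of control: it must recognize and
        # handle every event type itself.
        while True:
            kind, data = server.wait_event()
            if kind == "mouse_down":
                print("click at", data)
            elif kind == "key_press":
                print("key:", data)
            elif kind == "quit":
                break  # the application decides when the loop ends

    read_evaluation_loop(FakeServer([
        ("mouse_down", (10, 20)),
        ("key_press", "a"),
        ("quit", None),
    ]))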
Programming the Application

The notification-based programming paradigm instead involves a centralized notifier that manages event processing in place of the application. In this setup, the notifier receives events from the window system and filters them according to the application's interests. The application specifies which events it wants to handle and provides a callback procedure for each.

When an event occurs, the notifier checks whether it matches the application's interests and, if so, invokes the corresponding callback procedure. After the callback completes, control returns to the notifier, which continues to handle further events or terminates if requested. This paradigm centralizes control within the notifier, reducing the application's burden of processing all possible events. However, the centralization can complicate tasks like implementing a pre-emptive dialog box, where the application needs to override normal event processing to handle a specific user action, such as confirming an error before proceeding. This type of task is more straightforward in the read–evaluation loop paradigm.

Figure: The notification-based programming paradigm
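A minimal sketch of the notification-based paradigm, again in Python. The Notifier class here is a hypothetical stand-in for a real notifier: the application only registers callbacks, and the notifier owns the main loop.

    class Notifier:
        """Central dispatcher: reads events and calls back into the application."""
        def __init__(self):
            self.callbacks = {}

        def register(self, kind, callback):
            # The application declares its interest in an event type.
            self.callbacks[kind] = callback

        def main_loop(self, events):
            for kind, data in events:           # the notifier, not the app, reads events
                if kind in self.callbacks:      # filter by the app's declared interests
                    self.callbacks[kind](data)  # run the callback; control returns here

    notifier = Notifier()
    notifier.register("button_press", lambda data: print("pressed:", data))
    # "mouse_move" is not registered, so the notifier silently discards it.
    notifier.main_loop([("button_press", "OK"), ("mouse_move", (3, 4))])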
Using Toolkits

In WIMP interfaces, input and output are closely linked to visual elements on the screen, creating the illusion that these elements are interactive objects. For instance, the mouse's movement corresponds to the cursor's movement on the screen, making it feel as though the user is directly manipulating the cursor. This seamless integration enhances the user experience, but if the link is disrupted, users can become frustrated.

When interacting with objects like buttons, input and output behaviors are combined to provide intuitive feedback. For example, a button might change appearance and give audible feedback when clicked. Despite this illusion, input and output are technically separate at the windowing system level, and achieving the desired effect there requires significant programming effort. To simplify this, toolkits provide a higher level of abstraction, offering predefined interaction objects (or widgets) with built-in behaviors. Toolkits allow programmers to use and customize these objects easily, such as setting a button's label, without having to implement the behavior from scratch. Using a toolkit, an application can be built as a collection of interaction objects, each contributing to the overall functionality and user experience.

Figure: Example of the behavior of a button interaction object

One advantage of programming with toolkits is that they can enforce consistency in both input form and output form by providing similar behavior to a whole collection of widgets. For example, every button interaction object, whether within the same application program or across different ones, could by default behave like the button described in the previous figure. All that is required is that the developers of the different applications use the same toolkit. This consistency of behavior for interaction objects is referred to as the look and feel of the toolkit. Style guides, described in the discussion on guidelines, give the programmer additional hints on how to preserve the look and feel of a given toolkit beyond what is enforced by the default definition of the interaction objects.

Programmers can customize the behavior and appearance of interaction objects by setting instance attributes before compiling the application. Some windowing systems also allow these attributes to be adjusted without recompiling the program, using resources that modify attribute values before the program runs. However, this flexibility is usually restricted to a limited set of attributes for efficiency.
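A short illustration using Python's standard Tkinter toolkit (the quit-on-click behavior is just an example): the toolkit supplies the button's entire interactive behavior, and the programmer only sets attributes and supplies a callback.

    import tkinter as tk

    root = tk.Tk()
    # The widget bundles input handling and visual feedback; the
    # programmer only customizes attributes (here, the label) and
    # provides the callback to run when the button is activated.
    button = tk.Button(root, text="OK", command=root.destroy)
    button.pack()
    root.mainloop()  # the toolkit's notifier-style loop takes control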
User interface management systems

Despite the benefits of toolkits in simplifying interactive system development, they have limitations, including a restricted range of interaction objects and difficulty of use, even for experienced programmers. This highlights the need for additional support in designing and implementing interactive systems. User Interface Management Systems (UIMS) are designed to address these challenges by providing a higher level of service beyond toolkits. A UIMS focuses on:

1. Conceptual architecture: separating application logic from presentation.

2. Implementation techniques: ensuring a clear connection between application and presentation while managing them separately.

3. Support techniques: helping with the management, implementation, and evaluation of the interactive environment at runtime.

Some argue that the term UIMS does not fully capture the scope of these systems and suggest "User Interface Development Systems" (UIDS) instead, to emphasize tools that support design activities rather than only runtime management.

UIMS as a conceptual architecture

Research in interactive systems highlights a key issue: separating the application's semantics from the user interface. This separation is beneficial for several reasons:

1. Portability: it allows the same application to be used on different systems by keeping its development separate from device-specific interfaces.

2. Reusability: it increases the potential for reusing components, which can reduce development costs.

3. Multiple interfaces: it enables the creation of several interfaces for the same application functionality, improving interactive flexibility.

4. Customization: it allows both designers and users to customize the interface to enhance effectiveness without modifying the core application.

To manage the interaction between the application and its presentation, dialog control is crucial. This control involves three main components: the application, the presentation, and the dialog control itself. Implementing this separation can be challenging. There are two primary approaches to dialog control:

- Read–evaluation loop: the application manages dialog control internally, calling interface procedures as needed for input or output.

- Notification-based programming: dialog control is external to the application; user actions trigger notifications that invoke the appropriate application procedures.

Most UIMS use notification-based programming to better separate presentation from application logic, though not all employ toolkits for this purpose.

The Seeheim model

The development system known as Newman's Reaction Handler, introduced in 1968, was the first to support the separation between application and presentation. This concept evolved over time, and the term "User Interface Management System" (UIMS) was coined by Kasik in 1982, following initial research into enhancing human–computer interaction through graphical input. The first comprehensive conceptual architecture of a UIMS was outlined at a 1985 workshop in Seeheim, Germany. The Seeheim model identifies three main components:

1. Presentation: manages the appearance of the interface, including the input and output options available to the user.

2. Dialog control: regulates the communication between the presentation and the application.

3. Application interface: represents the application's semantics as viewed through the interface.

The Seeheim model was designed to focus on UIMS components rather than the entire interactive system, so it does not explicitly include the application or the user. The model aligns well with the classic layers of a computer system known from compiler design: lexical, syntactic, and semantic.

Figure: The Seeheim model of the logical components of a UIMS (USER – Presentation – Dialogue Control – Application Interface Model – APPLICATION, mapped onto the lexical, syntactic, and semantic layers)

While effective for explaining the development of UIMS up to 1985, the Seeheim model lacked guidance for structuring future UIMS. A notable issue is the inclusion of an additional component (the lowest box in the figure below), intended to allow explicit dialog control to be bypassed for efficiency, which was not necessary in a conceptual architecture. This inclusion reflects a failure to distinguish between logical design and implementation concerns, and it led to confusion over how UIMS should be structured subsequently.

Figure: The Seeheim model with the bypass component: a switch permits direct communication between application and presentation, giving rapid semantic feedback while still being regulated by dialogue control

The model–view–controller triad in Smalltalk

The Seeheim model does not address how to construct large, complex interactive systems from smaller components. This gap has been addressed by other conceptual architectures, such as the model–view–controller (MVC) paradigm introduced in the Smalltalk programming environment. MVC facilitates building interactive systems by separating concerns into three components:

1. Model: represents the application semantics.

2. View: manages graphical and textual output.

3. Controller: handles user input.

In Smalltalk, these components are implemented as general object classes that can be inherited and modified as needed. The MVC approach supports modularity, enabling multiple views and controllers to be linked to a single model and thereby allowing various input-output techniques to represent the same application semantics. This helps in constructing new interactive systems by reusing and adapting existing components.

Figure: The model–view–controller triad in Smalltalk
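A minimal MVC sketch in Python. The class names and the attach/update protocol are illustrative assumptions, not Smalltalk's actual message protocol; what it shows is the registration direction discussed under implementation considerations below: the view registers with the model, and the model notifies every registered view when it changes, so several views can share one model.

    class CounterModel:
        """Model: holds the application semantics (here, just a number)."""
        def __init__(self):
            self.value = 0
            self.views = []

        def attach(self, view):
            self.views.append(view)     # the view registers with the model

        def increment(self):
            self.value += 1
            for view in self.views:     # the model broadcasts changes to all views
                view.update(self.value)

    class TextView:
        """View: renders the model's state; one model may have many views."""
        def update(self, value):
            print("count is now", value)

    class ButtonController:
        """Controller: turns user input into operations on the model."""
        def __init__(self, model):
            self.model = model

        def on_click(self):
            self.model.increment()

    model = CounterModel()
    model.attach(TextView())
    ButtonController(model).on_click()   # prints: count is now 1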
Presentation–Abstraction–Control (PAC) Model

The presentation–abstraction–control (PAC) model, proposed by Coutaz, is another multi-agent architecture for interactive systems. PAC organizes a system into agents, each a triad of three parts:

1. Abstraction: represents the application semantics.

2. Presentation: manages both input and output.

3. Control: oversees the dialog and maintains consistency between abstraction and presentation.

PAC differs from the model–view–controller (MVC) paradigm in several key ways:

- Input and output: PAC combines input and output into a single presentation component, while MVC separates them.

- Consistency management: PAC includes an explicit control component dedicated to ensuring consistency between abstraction and presentation, whereas MVC leaves this responsibility to the programmer or designer.

- Implementation independence: PAC is not tied to any specific programming environment, making it more of a conceptual framework. This flexibility allows PAC to isolate the control component more easily than MVC, which is closely associated with object-oriented programming environments.

Figure: The presentation–abstraction–control (PAC) model

Implementation considerations

A conceptual architecture, such as the Seeheim model, is distinct from its implementation. To implement a conceptual architecture, one must determine how its components (presentation, dialog controller, and application interface) are realized. For instance, in graphical user interfaces, window systems and toolkits can separate application logic from presentation. Callback procedures are a common method for implementing the application interface; in the X toolkit, these callbacks are directional, requiring the application to register with the notifier. In the model–view–controller (MVC) pattern, callbacks are also used for communication between views/controllers and models, but here the view/controller registers with the model. Communication between these components occurs through the method calls typical of object-oriented programming, but neither approach specifically addresses the management of the dialog components.

Several techniques are used for dialog modeling within UIMS (a small worked example follows this list):

- Menu networks: dialogs are modeled as networks of menus and submenus, where each menu represents the possible user inputs and the transitions to other menus or actions. Menus can be graphical, using buttons or other interactive elements.

- Grammar notations: dialogs are described using formal grammars such as BNF (Backus–Naur Form), which suit command-based interfaces. However, they struggle to model the directionality of the interaction and semantic feedback.

- State transition diagrams: these provide a graphical representation of dialog events, but they are limited in linking those events with application or presentation events and in representing the communication between them.

- Event languages: similar to grammar notations, but able to express directionality and support semantic feedback. They describe localized input-output behavior well, but can obscure the overall flow of the dialog.

- Declarative languages: these describe the relationship between application and presentation by modeling shared data rather than event sequences. They specify the desired outcomes of interactions rather than the order of events.

- Constraints: a subset of declarative languages, constraints explicitly define dependencies between presentation values and application values. Techniques like the ALV (abstraction-link-view) model use constraints to describe dialog controllers independently of presentation and application.

- Graphical specification: the dialog is programmed directly in terms of the graphical interface, which can make the process accessible to non-programmers and integrates the dialog specification with the actual user interface elements.

Each technique has its strengths and weaknesses, particularly in how it handles communication and interaction between the application and presentation components.
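As the worked example promised above, here is one way a state transition diagram for a dialog can be executed: encode it as a table of transitions. The states, input tokens, and the two-step delete-confirmation dialog are invented for illustration.

    # A dialog as a state transition table: (state, user input) -> next state.
    # This is a sketch of how a UIMS might run a state-transition description.
    TRANSITIONS = {
        ("idle", "delete"):    "confirming",
        ("confirming", "yes"): "idle",   # action confirmed, back to the start state
        ("confirming", "no"):  "idle",   # action cancelled
    }

    def run_dialog(inputs):
        state = "idle"
        for token in inputs:
            # Inputs with no matching transition leave the state unchanged.
            state = TRANSITIONS.get((state, token), state)
            print(f"input {token!r} -> state {state!r}")

    run_dialog(["delete", "no", "delete", "yes"])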
Elements of Windowing Systems

Earlier we discussed the elements of the WIMP interface, but only in terms of how they enhance interaction for the end user. Here we describe in more detail the windowing systems used to build the WIMP interface.

A key feature of a windowing system is its ability to abstract away the details of different hardware devices, such as screens, keyboards, and mice. This abstraction allows programmers to write applications that run on various devices without adjusting for the specific commands or data formats of each device. Instead, they use a generic language understood by an abstract terminal, which translates their commands into the appropriate format for each device. This not only simplifies programming but also enhances the portability of applications, as only one device driver is needed for each hardware type.

The windowing system employs a fixed generic language, known as its imaging model, to describe images; this model can handle a wide range of visual content. For efficiency, specific primitives are used to handle text images, either as detailed pixel images or through more general font definitions.

In the WIMP (windows, icons, menus, pointer) interface paradigm, windowing systems also enable multiple tasks to run concurrently by managing several abstract terminals on a single hardware setup. Each terminal acts as an independent process, and the windowing system coordinates these processes, allowing each application to be developed as if it were isolated. The system also manages the display by allocating separate screen regions to each active terminal and resolving conflicts when regions overlap.

In summary, a windowing system plays two key roles: it abstracts the specifics of different hardware devices for programming, and it manages multiple independent applications running simultaneously (depicted in the figure below). The architectures designed to accomplish these roles were described at the start of this lecture.

Figure: The roles of a windowing system
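To make the abstract-terminal idea concrete, a sketch of device abstraction in Python. The AbstractTerminal interface, its drawing primitives, and the VT100Driver are all hypothetical: the application programs against one generic imaging interface, and one driver per hardware type translates the generic calls.

    from abc import ABC, abstractmethod

    class AbstractTerminal(ABC):
        """Generic imaging interface: applications program against this,
        never against a concrete device."""
        @abstractmethod
        def draw_text(self, x, y, text): ...

        @abstractmethod
        def draw_line(self, x1, y1, x2, y2): ...

    class VT100Driver(AbstractTerminal):
        """One driver per hardware type translates the generic calls."""
        def draw_text(self, x, y, text):
            print(f"VT100: text {text!r} at ({x}, {y})")

        def draw_line(self, x1, y1, x2, y2):
            print(f"VT100: line ({x1},{y1})-({x2},{y2})")

    def draw_button(term: AbstractTerminal):
        # Application code is device-independent: it runs unchanged on
        # any terminal that implements the abstract interface.
        term.draw_line(0, 0, 40, 0)
        term.draw_text(4, 1, "OK")

    draw_button(VT100Driver())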
