
MIT 150 | Project Athena - Users and Developers Conference, Day 1 [1/4] 1/14/1987

[MUSIC PLAYING]

SCHEIFLER: Good morning. My name is Bob Scheifler. And on behalf of MIT, the Lab for Computer Science, the MIT X Consortium, Project Athena, and other sundry people, I welcome you to the Bahamas.

Registered attendance for this conference was in excess of 1,000. So a few people probably are frozen somewhere in between wherever they are and here. We have a lot to get through, so I'm not really going to say much more. I'm going to ask that the speakers introduce themselves as they come up, so that I don't have to keep bopping up and down.

Due to technical difficulties, we're going to have a slight rearrangement of the morning agenda. So without further ado, we'll go into our first talk.

BOWBEER: Just a second here. Hi. I'm Joe Bowbeer from Apollo Computer. When you start up most window systems, they take over the whole screen. The sample server takes over the whole screen.

At Apollo, we're interested in integrating the X server, so that it shares the screen with other window systems. If x, y, and z are window systems, we'd like x, y, and z to coexist on the same screen. We'd like the user to be able to access x, y, and z windows on the same screen.

Let's see, first slide. One back, please.

PROJECTIONIST: How's that?

BOWBEER: Yeah, I have the master control here, I think. Sorry about this. There you see an example of the Apollo window system. When we shipped our first node in 1981, it came with a window system on it. That node was shipped to Harvard University, and I was still in school at the time.

And so our customers have had seven years to develop code, which uses and requires the Apollo window system. So the first reason we want to try to have X share the screen with other window systems is so that we can give our customers X windows without taking away their old window system.

The second reason is functionality. In this example, you see 3D graphics in a window on the left and a text publishing system on the right. We used Interleaf, as you can tell, to do the slides. And 3D graphics is an example of something that's available in the Apollo window system but won't be available in X for a while.

Interleaf is an example of a complex application, which may take some time to port to X. In general, window systems have their advantages, and we want to try to offer all the advantages at the same time. Another reason is extensibility: by integrating the window systems together, we have a platform to which new window systems can be added in the future.

This is an X screen. It's rather sparsely windowed, because I'm leading up to something. We've got xclock in the corner and xterm down at the bottom. Just a minute here. The on/off switch-- there you go.

Now the question is, what do you get when you put two window systems together? Here's one idea. If you integrate them at the video level, it's sort of (x + y) / 2. I don't think special glasses will help with this one, either.

Here's an example. Sorry, I meant to obscure one of the X windows. They're both unobscured. And that's how you can tell this is a real slide and not a fake one. Because if I'd faked it, I would have obscured one of the windows.

Let's see, there's that. Did you see that? That's what you get.

This is one of those three-part talks. I'm going to talk about the strategy for doing this, then a little bit about the implementation, and then I'll conclude.

First of all, how are we going to integrate these window systems or how did we integrate the window systems? You can consider three options.

One is, if they were all server-based window systems, you could merge them at the protocol level, just merge the protocols together, make one big super server that does both of the things. Unfortunately, window systems are not all server-based.

There are procedure-based window systems. The Apollo window system is an example of this, where the window and graphics routines are in a library, and the application calls directly into the library.
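To make the distinction concrete, here is a rough C sketch of the two styles; the names and the request layout are invented for illustration, not Apollo's or X's actual interfaces.

```c
#include <unistd.h>

/* Procedure-based: the application links against the window library and
 * calls drawing routines directly; no protocol, no separate server. */
extern void wsys_draw_line(int window, int x1, int y1, int x2, int y2);

void draw_procedure_based(int win)
{
    wsys_draw_line(win, 0, 0, 100, 100);   /* executes in the caller */
}

/* Server-based (X-style): the application encodes a request and writes
 * it down a connection; the server executes it asynchronously. */
struct line_request { short opcode, window, x1, y1, x2, y2; };

void draw_server_based(int conn_fd, short win)
{
    struct line_request req = { 1 /* made-up opcode */, win, 0, 0, 100, 100 };
    write(conn_fd, &req, sizeof req);      /* drawing happens later, remotely */
}
```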

So then we go on to number two. You can layer one window system on the other. In this case, you would layer the procedure-based system on the server-based system.

The problem here is, one, you're going to degrade performance in the procedure-based system, because you've introduced this additional layer. Two, you're probably going to have compatibility problems, since you've reimplemented most of your procedure-based system.

Three, there's also a problem with direct memory access to the screen. Procedure-based systems often let the applications directly manipulate the bits in the screen, and that's something that server-based systems can't do without some fooling. So we get to number three, which is to build a shared window system nucleus, build a window system nucleus that all the window systems use.

Before I get on to the implementation, let me talk a little bit more about what this means to have shared window systems. One point, by sharing the window system nucleus, all the window systems can access all the windows on the screen, even the ones they didn't create.

In this example, you see the x window is the root. There's a y window, one of the children, a z window, one of the children. And if you run an X window manager, for example, you'll be able to move and resize the x window and the y window.

One of the tasks in integrating a window system is that you need to fix it so it doesn't lose its mind if it sees windows that it didn't create. The other point about this is that, if you say, use the X window manager to change the size of the y window, then the y window system, because it's sharing the nucleus, will find out about those changes. And another thing you have to do when you integrate window systems is to fix it so it doesn't lose its mind if someone else changes one of its windows.

We give the window system that created the window squatter's rights on it. That is, the window system that created the root, in this case the X window system, gets to control the background pattern, the cursor shape, and the border width. You can run two window managers at once, but we don't think that's particularly useful, so we aren't trying to bend over backwards to make that work. We only support one window manager at a time.

You might also note that, if the integration is done correctly, then you could run two X servers on the same screen. If anyone knows why you'd want to do that, I'd like to know.

Now, in order to build the shared nucleus, you have to find out what things have to be shared. And here's a list of the things we have thought of. You need to share the window database, so that every window system can access all the windows and find out about changes to the windows they care about.

You need to share the input. You need to share screen resources: the graphics controllers, the screen memory, the color lookup tables, and the screensaver, the mechanism that turns off your video if you haven't talked to the screen for a while. That all needs to be shared.

Also selections: for example, we'd want to be able to cut and paste between windows of different window systems. I'll talk more about the window database and input in a minute.

As far as the screen resources go, the graphics controllers and screen memory just need some sort of fair dispatch mechanism that will give every window system a chance. For the color lookup table, we're implementing a superset of the X color model and having X use that.

Here's what we need to share in the window database. The position and size of the windows, so that one window system can change the position and size of another's window and it will all work. The window clip region: an x window can obscure a y window, and that needs to be handled at the clip-region stage.

Parents, siblings and children-- you can have an x window as a parent of a y window and vice versa. The class, input-output or input-only, which determines how the clip region is computed. And the status information: flags such as exposed, obscured, and mapped, plus an indication of whether the window has been changed recently-- for example, a flag that tells you whether the position has changed.

Note that there is no border width in the window database. Every window system needs to augment the shared database with its own special needs. And the border width is something that X would handle by itself.
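As a rough illustration of what one entry in such a shared database might hold, here is a minimal C sketch. The field set and all names are assumptions drawn from the description above, not the actual Apollo layout; note the deliberate absence of a border width.

```c
typedef unsigned long global_id_t;      /* handle shared across window systems */

enum win_class { WIN_INPUT_OUTPUT, WIN_INPUT_ONLY };

struct region;                          /* clip regions kept by the nucleus */

struct shared_window {
    global_id_t           id;           /* global ID, described below        */
    int                   x, y;         /* position                          */
    unsigned              width, height;
    struct region        *clip;         /* an x window can obscure a y window */
    struct shared_window *parent;       /* tree links may cross systems      */
    struct shared_window *next_sibling;
    struct shared_window *first_child;
    enum win_class        class;        /* drives clip-region computation    */
    unsigned              exposed  : 1; /* status flags                      */
    unsigned              obscured : 1;
    unsigned              mapped   : 1;
    unsigned              moved    : 1; /* "changed recently" indications    */
    unsigned              resized  : 1;
};
```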

We also need to share properties. We define a superset of the X properties. And every window system implements its properties using that superset, so that, for example, a client of one window system can communicate with the window manager of another window system via the shared properties.

We also need a global ID, and that's the handle by which a window is passed from one window system to another. We intend to supply Apollo extensions, where you can pass in a global ID and get the X window ID back and pass in an X window ID and get the global ID back.
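The talk doesn't name these extension calls, so the following C declarations are purely hypothetical signatures for the two conversions just described.

```c
#include <X11/Xlib.h>

typedef unsigned long global_id_t;

/* Hypothetical: look up the X window ID for a nucleus handle, e.g. for
 * a window that was created by the Apollo window system. */
Window XApolloWindowFromGlobalID(Display *dpy, global_id_t gid);

/* Hypothetical: the reverse, so an X window can be handed to another
 * window system through the shared nucleus. */
global_id_t XApolloGlobalIDFromWindow(Display *dpy, Window w);
```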

Also, a few special features of the database. The window systems can use the status information to find out about changes to the windows they care about. For example, they will notify interested clients about the changes, clean up their own act, and that sort of thing.

This kind of notification is after the fact. We have a couple of features that give us warning as the action is happening, or before. First, policy groups: a window can be associated with any number of policy groups, and when the window is being operated on, each policy group gets a chance to modify the operation. That's used to implement redirection in X; substructure redirect gets one group and resize redirect gets another.

Also, a window system can select advance damage warning on a window. Then, just before the window is damaged, the window system is called back and given a chance to save the bits, which is what's needed for backing store in X.
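Here is one way the two advance-warning hooks could look in C. These names and signatures are invented to illustrate the idea; the talk describes the mechanisms but not an API.

```c
struct shared_window;   /* entry in the shared window database */
struct region;          /* area of the screen */
struct win_op;          /* a pending operation: move, resize, restack, ... */

/* Policy-group hook: called while an operation is happening, with a
 * chance to modify it.  X's substructure redirect would be one group,
 * resize redirect another.  Return nonzero to let the operation proceed. */
typedef int (*policy_hook)(struct shared_window *w, struct win_op *op);

/* Damage-warning hook: called just before a window's bits are damaged,
 * so the owner can copy them out -- what X backing store needs. */
typedef void (*damage_hook)(struct shared_window *w, struct region *doomed);

void nucleus_join_policy_group(struct shared_window *w, policy_hook h);
void nucleus_select_damage_warning(struct shared_window *w, damage_hook h);
```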

Here's our model for input, for the shared input. Events enter from the devices and are serialized and assigned timestamps. Then they may be translated, discarded, or turned into different events, or more complex things might happen. Then they are demultiplexed, or routed, to the interested clients. Each window system has a queue that its events are routed to.
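A compact C sketch of that pipeline, under invented names: events are timestamped on entry, passed through translation, and then demultiplexed to the queue of the window system that should receive them.

```c
#include <stddef.h>

struct event { unsigned long time; int type, detail, modifiers; };

struct ws_queue {                      /* one queue per window system */
    struct event buf[256];
    size_t       tail;
};

extern int translate(struct event *e);                 /* may rewrite; 0 = discard */
extern struct ws_queue *route(const struct event *e);  /* focus-tree lookup */

static void enqueue(struct ws_queue *q, struct event e)
{
    q->buf[q->tail++ % 256] = e;
}

void dispatch_event(struct event e, unsigned long now)
{
    e.time = now;                      /* serialized and timestamped on entry */
    if (!translate(&e))
        return;                        /* discarded by a binding              */
    enqueue(route(&e), e);             /* demultiplexed to one system's queue */
}
```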

The routing is done with focus trees. This isn't nearly complete, but it's still too complicated. What was I trying to show? We have a slightly more general model of input than X. The focus trees and the windows are not one to one, necessarily. This has two x windows and one y window.

And there's a separate focus tree. These are windows, here. Can you see this? Good. These are windows here. And these are--

AUDIENCE: [INAUDIBLE]

BOWBEER: Say again? Sorry. OK. These are focus trees here. We have a separate focus tree for the pointer and for the keyboard, so that they can have separate focuses. In this example, I think I'm showing that the pointer is in window x2, so you activate the focal path associated with that. That would be the kind of routing where the window that contains the pointer is the window that gets the event.

And bindings are this sort of generalized translation thing we have, which can do simple translation, such as mapping in X, and can also do something more complicated. For example, for the passive grab, you can implement a binding which, when it sees a certain button and modifier combination, will activate the grab.
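For instance, a binding implementing an X-style passive grab might look like this in C; again, the types and event codes are invented for illustration.

```c
struct event { unsigned long time; int type, detail, modifiers; };

#define EV_BUTTON_PRESS 4              /* made-up event code */

struct grab_binding {
    int  button;                       /* the combination being watched for */
    int  modifiers;
    void (*activate_grab)(void);       /* redirects subsequent input */
};

/* Returns 0 if the event was consumed by activating the grab,
 * 1 if it should pass through translation unchanged. */
int grab_binding_filter(struct grab_binding *b, const struct event *e)
{
    if (e->type == EV_BUTTON_PRESS &&
        e->detail == b->button &&
        e->modifiers == b->modifiers) {
        b->activate_grab();
        return 0;
    }
    return 1;
}
```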

How much of this is real and how much is imaginary? As you saw from the slide there, a lot of it's real. We've implemented the core of the window system nucleus. It depends on other Apollo software, so it's not a portable implementation.

Actually, I think it was started before we knew about X, which might explain why it's not portable. But we've modified the Apollo window system so that it uses the shared nucleus. We've modified the X server, so that it uses the shared nucleus. Note that all we're doing is changing the server implementation in order to do this.

We'd like to see our changes in some future X release. And this is a picture of how those changes might come about. We propose to introduce a new directory called WDX, for Window Dependent X. And in that directory, there's the WI directory, the Window Independent part, which contains the same code the server has now.

And then there's the Apollo directory, which contains our Apollo-specific version. And we would like to see a more portable, shared version implemented in the future, which is what the question marks are for. I'll conclude with two statements and two questions. It appears that old window systems never die. We're trying to find a place on our screen for one such system. And we're also trying to find places on our screen for new window systems.

The window system nucleus is never done. I heard that. We don't claim that, come what may, the window system nucleus solves all the problems. We've modified it not just once for X, and we know we'll have to modify it again. But, on the other hand, it's almost done.

The first question: will users be able to handle this complexity? With all the editors and window managers we have today on one window system alone, adding more window systems can't help but increase the complexity. We wonder whether users can make sense of this.

And the last question, what kind of hybrids will develop? If users start making sense of these multiple window systems, what sorts-- [AUDIO OUT] Several large chunks or each chunk, for example, uses a different window system just because of the natural evolution of software.

Well, that's it. Well, anyway. I think you're supposed to ask questions now. Oh, sorry, while you're asking questions, I'll put on my last slide.

SCHEIFLER: It would be useful if people with questions could go to a mic.

AUDIENCE: I have a loud voice, so I think it will be OK. You say that the system will use [? a typical ?] window manager for the entire thing. Do you expect to be able to support a general X window manager, or do we need a special proprietary window manager that has a lot of detailed knowledge of how the Apollo works?

BOWBEER: No. The intent is that all clients will work as if they've got the whole screen to themselves and the whole window system to themselves. And that includes the window managers. We intend that wm and uwm will work.

AUDIENCE: You talked about layering the Apollo window system on top of X and that being sort of inefficient. What about layering X on the Apollo window system, which is procedure-based? It may not be that inefficient.

BOWBEER: But you don't get the network transparency that way. The procedure-based system is local; there are no remote procedure calls or anything. So you're tied to running X on the same system as the server in that case. So you'd have to do it the other way around in order to get something that was network transparent. Hello. Yes?

AUDIENCE: Does your implementation allow you to use the Apollo window system and then export through X to a remote [INAUDIBLE] host or remote display [INAUDIBLE] a new translation that would allow you to go through X to get off the local [INAUDIBLE]?

BOWBEER: Say again?

AUDIENCE: I'm wondering if your window system, the way you have it set up here--

BOWBEER: You mean the shared version?

AUDIENCE: --[INAUDIBLE] in your superset that would allow [INAUDIBLE] window-based application to translate into X and use a display other than the local host?

BOWBEER: I think I understand your question, but possibly not. The only way that could happen is if the Apollo application used both X and the Apollo window system. It would have to use X in order to do anything remote. So you'd have to modify the application in order to do anything remote.

SCHEIFLER: We would appreciate it if people do go to the mics, or it won't come out on the videotape. It's worth making the effort to walk over.

BOWBEER: OK? Thanks.

[APPLAUSE]

PROJECTIONIST: It's working. It's working. It's working.

SCHEIFLER: Great.

PROJECTIONIST: Thank you. I'm sorry.

GEROVAC: OK, we're back. We're going to be talking about the experiences that we had in porting X to a high performance workstation that included 3D graphics. There are three of us talking. I'll be talking about some of the background and the basic implementation environment in the system.

We began this quite a while ago. And actually, before X11 began happening, we were looking at what some of the considerations were for doing 3D coordinated with bitmap graphics in a windowing system. In April, when the X11 design began, we were also looking, then, very much at the issues involved in coordinating 3D graphics in X.

At the same time, we were working on the 3D graphics interface. Our initial definition of that came out a little bit after the end of the public review on X11. Also, at about that same time, a Phigs+ committee started up to look at adding shading and lighting, curves and surfaces to Phigs. Phigs is an ANSI almost-standard for 3D graphics. And then, a year ago was the first X conference.

In performing this work, we were working with prereleases of the server. We had our first functional server, without the 3D extensions running, in December of '86. And we had 3D running without X on the same device at about the same time.

And then it took us another few months to get those coordinated. At that time, we were using a bypass mode, and we still are for the 3D: the 3D graphics is coordinated with X, but the actual 3D requests don't go through the X protocol.

Then, about six months ago, we started a public 3D extension definition, which has been going on since then. And that's now called X3D-PEX. There will be a presentation, tomorrow morning, which will go into more detail on PEX.

There are a few observations to make from that list of dates. One of them was that all of the components that we were working with were moving targets. That included the hardware, all the components of X, and the 3D interfaces that we were working with. Another aspect is that all of these activities were happening geographically distributed across the country, so a lot of times we couldn't just walk next door to work out issues. And all of these activities were happening in parallel, so everyone was looking at multiple mailings all the time.

The hardware was frozen before any of the interfaces were, really. And there were some things that we could do in microcode later on in the project. And all of the software that was developed, except for some low-level specific stuff, is source-compatible across VMS and Ultrix. Ultrix is Digital's Berkeley-based Unix product.

And we wanted to do more, but, given the available resources and that we wanted to have this in a timely fashion, we took some of the things that we wanted to do and backed off a little bit from them.

The environment that we were working with. We started with the standard public releases of the X11 server, reimplemented much of DDX, and used the extension mechanism to add the additional functions that we wanted. We extended X to support a highly interactive 3D graphics interface.

The 3D interface is a peer interface with bitmap graphics, and they can be used simultaneously by clients. And we had a performance emphasis: we were trying to make sure that the window system did not impose any performance hit on the overall system.

The 3D interface that we implemented has Phigs and Phigs+ like functionality. It is lower level than Phigs; Phigs is layered on top of it.

And it is currently a structure mode interface. Originally, we were looking at doing an immediate mode interface, as well. We went through the designs on that and didn't do that, again, due to time. Regular X is an immediate mode style of interface. In structure mode, you create a graphics data structure in the space of the server that describes the object that you want to print or display.
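The contrast can be sketched in C as follows. These function names are hypothetical, not the actual Digital 3D interface; the point is that structure mode builds server-side data and defers rendering.

```c
typedef unsigned long XID;

/* Immediate mode (core-X style): the call draws now; nothing is retained. */
extern void draw_line_3d(XID window, const float p1[3], const float p2[3]);

/* Structure mode (Phigs style): build a retained structure in the server,
 * then post it; the server can retraverse and redraw it on its own. */
extern XID  create_structure(void);
extern void append_line_3d(XID s, const float p1[3], const float p2[3]);
extern void post_structure(XID s, XID window);

void build_x_axis(XID window)
{
    const float origin[3] = {0, 0, 0}, x_end[3] = {1, 0, 0};
    XID s = create_structure();
    append_line_3d(s, origin, x_end);  /* edits server-side data; no drawing yet */
    post_structure(s, window);         /* now the server renders, and re-renders */
}
```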

The hardware that we were working with had a lot of support for graphics contexts. It could provide asynchronous execution threads, but that was another thing that we looked at, did some initial design for, and didn't complete the implementation of. And there are separate bitmap graphics and 3D graphics contexts.

The hardware also supports windowing and visuals, with window clipping to arbitrary regions, pixel formats, and multiple color maps. And then, also, tightly coupled 3D interaction, so that input events can change the display directly without having to loop back.

This is a very simple picture of the hardware model. It is a multiprocessor system that communicates through shared memory. The processors are heterogeneous. The server, running in the host processor, creates and modifies data structures in the shared memory.

The 3D graphics processor and bitmap graphics processor read the data asynchronously out of the shared memory and perform their commands. The shared memory is used for the 3D graphics data structures and command queues for the bitmap graphics.
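A minimal sketch of that coupling in C: a ring buffer of commands in the shared memory, filled by the host server and drained asynchronously by a graphics processor. The layout is an assumption for illustration; the real firmware interface is not described in the talk, and memory-ordering concerns are glossed over.

```c
#include <stdint.h>

struct gfx_cmd { uint32_t opcode; uint32_t args[7]; };

struct cmd_queue {                     /* lives in the shared memory */
    volatile uint32_t head;            /* advanced by the graphics processor */
    volatile uint32_t tail;            /* advanced by the host server        */
    struct gfx_cmd    ring[1024];
};

/* Host side: append a command; returns 0 if the consumer has fallen behind. */
int queue_put(struct cmd_queue *q, const struct gfx_cmd *c)
{
    uint32_t next = (q->tail + 1) % 1024;
    if (next == q->head)
        return 0;                      /* full; caller waits or retries */
    q->ring[q->tail] = *c;
    q->tail = next;                    /* publish: processor may now read it */
    return 1;
}
```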

This is a simple diagram of the software model. Everything above there is standard X as it's released. What was done is that the DDX level and below was replaced and the 3D extension was added.

Each of the components, the requests coming from the applications get dispatched to the appropriate component. And then those create the data structures in shared memory. We implemented a bypass mode that bypasses the protocol and gives the application library direct access to the shared memory, without going through the transport protocol. And we then, again, didn't complete the entire 3D protocol for this.

The next person that will talk will be Dave Carver.

CARVER: Can everybody hear me OK? Sounds OK. I'm Dave Carver. And I'm going to speak about our adaptation of the core X server to our favorite piece of hardware. My focus will be primarily on display management, graphics contexts, and a little bit further into the execution model of X.

Our hardware, as Branko mentioned, provides a multiple window, multiple format environment. The framebuffer can be viewed in many different ways by the video backend. And in our implementation, we chose to provide 1 plane, 8 plane, and 24 plane window capability. That comes out in multiple window depths.

The visual classes, correspondingly, are PseudoColor and DirectColor. We did have the choice of doing 4 and 12 plane windows as well, but we deferred that, primarily because of schedule and resources.

The way we chose to present this environment was to pretend as though our workstation was a one plane window system, or monochrome system, and provide 8 and 24 planes upon request. This seemed to make sense primarily because, early on, most application development was done on monochrome systems.

The color map environment: our hardware does provide multiple hardware color tables, and into those we can install multiple color maps simultaneously. One choice we did make was to reserve one of the hardware color tables for system use, to provide StaticGray and TrueColor color maps.

In our environment, that made sense, because the vast majority of the 24 plane windows, DirectColor windows or TrueColor windows, needed a particular kind of color map. We chose a gamma-corrected color map for the TrueColor.
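Computing such a gamma-corrected ramp is simple; here is a C sketch assuming an 8-bit table and a gamma of 2.2, values the talk does not actually specify.

```c
#include <math.h>

#define RAMP_LEN 256

/* Fill one channel of a color table with a gamma-corrected ramp. */
void fill_gamma_ramp(unsigned char ramp[RAMP_LEN], double gamma)
{
    for (int i = 0; i < RAMP_LEN; i++) {
        double v = (double)i / (RAMP_LEN - 1);            /* linear intensity */
        ramp[i] = (unsigned char)(255.0 * pow(v, 1.0 / gamma) + 0.5);
    }
}

/* e.g. fill_gamma_ramp(ramp, 2.2); one shared table can then serve every
 * TrueColor window, which is why reserving a single hardware table works. */
```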

Three things that we sort of found we needed as we went along, and that are probably still needed, are color map conventions and utilities. Currently, if installation does not happen properly, or a color map has to be de-installed because there are too many color maps to be installed, there aren't conventions and utilities provided to the applications to make that graceful. Window manager support would be very, very useful.

And one other, probably more of a nit than anything else. Given that our default windows were one plane, and many of our applications wound up creating 8 and 24 plane windows, it would have been useful to have a defaulting arrangement for color maps on a per-visual basis.

One aspect of our hardware is that it has very little offscreen memory. Well, the memory we do have is organized for efficient drawing and not for allocation, so the small amount we have is very difficult to use for pixmap storage or font storage or things of that sort. Therefore, we wind up storing pixmaps and backing store in system memory.

Our hardware does not support, or does not assist in, system memory drawing. As a fallout of that, you get a very wide variation in performance as you draw to windows on the screen versus pixmaps in system memory. The other fallout is that you wind up with very high code complexity trying to cover all the cases of mixing system memory drawing with framebuffer drawing-- for instance, copy areas between the two.
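The case explosion can be sketched as a dispatch on where each drawable lives; in reality the matrix is multiplied further by depths and formats. The names here are invented.

```c
enum location { IN_FRAMEBUFFER, IN_SYSTEM_MEMORY };

struct drawable { enum location loc; /* plus depth, format, pointer, ... */ };

extern void hw_copy(struct drawable *src, struct drawable *dst); /* assisted  */
extern void sw_copy(struct drawable *src, struct drawable *dst); /* CPU loops */

void copy_area(struct drawable *src, struct drawable *dst)
{
    if (src->loc == IN_FRAMEBUFFER && dst->loc == IN_FRAMEBUFFER)
        hw_copy(src, dst);     /* the only case the hardware helps with */
    else
        sw_copy(src, dst);     /* pixmaps/backing store in system memory:
                                  the CPU does the work, much more slowly */
}
```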

The hardware supports windowing in fairly direct fashion. It provides a window location or translation on the screen as well as masking and pixel formatting. It also provides bitmap masking above windows to give a per pixel masking capability.

We chose to drive this capability primarily through lazy evaluation of the hardware structures. If a window manipulation occurs, we generally invalidate the state of the hardware, but we do not reconstruct the state for a window until rendition into that particular window is required.
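In outline, the lazy scheme looks like this (invented names): manipulations only mark the hardware state stale, and the expensive rebuild happens on the next attempt to draw.

```c
struct hw_state { int valid; /* plus clip list, translation, format, ... */ };

struct window { struct hw_state hw; };

extern void rebuild_hw_state(struct window *w);   /* the expensive part */

void on_window_manipulated(struct window *w)
{
    w->hw.valid = 0;           /* cheap: just invalidate */
}

void begin_drawing(struct window *w)
{
    if (!w->hw.valid) {        /* bind lazily, only when rendering happens */
        rebuild_hw_state(w);   /* full replacement rather than edits */
        w->hw.valid = 1;
    }
}
```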

The last note simply depicts that we chose to do a replacement of the state rather than sending down editing commands. That was generally because we didn't have sufficient memory management primitives in our hardware.

The picture depicts, primarily, what we do. From an X window we construct what we call a display context, and that represents a location and clipping arrangement on the screen. From that, we enable the window for bitmap graphics and, as you'll see later on, for 3D graphics as well. But the enabling, again, is lazily bound; it doesn't occur unless one of those kinds of drawing is actually performed.

In terms of window manipulation, two of the things we wanted to try to do were to provide overlays and transparencies. The arrangement of our framebuffer allows separate areas of the framebuffer to be used for, let's say, the one plane window and for the 24 plane window.

In that situation, it makes sense to try to not disturb the drawing into the 24 plane window by the one plane window. That would be the sense of the overlay. The transparency, again, would be to let some amount of the one plane window show above the 24 plane window.

And we ran into a number of stumbling blocks in attempting to do this. Number one is that the DDX interface provides basically two entry points for screen update. There's a copy window function and a paint window function.

As a side note, there is no central entry point for overall update. Now, there is a central entry point for database validation, and the miValidateTree procedure provided with the sample server is an example implementation of it.
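Paraphrased in C, the interface being criticized has roughly this shape; these are not the sample server's literal declarations, just the structure of the three entry points mentioned.

```c
struct window;
struct region;

struct screen_update_hooks {
    /* shift the bits of a window whose position on the screen changed */
    void (*copy_window)(struct window *w, int dx, int dy,
                        struct region *was_visible);
    /* fill newly exposed areas with background or border */
    void (*paint_window)(struct window *w, struct region *exposed, int what);
    /* central recomputation of clip/visibility for a subtree;
     * miValidateTree is the sample implementation of this step */
    int  (*validate_tree)(struct window *parent, struct window *child,
                          int kind);
};
```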

Another thing that's done quite commonly in the server currently is that a given windowing manipulation will be composed of more than one of the more primitive ones. Moving might be an unmap, a reconfiguration of the database, and then a remap.

DDX doesn't provide any kind of transaction or compositing primitive so that you can coordinate all of that activity at the device level. And those are all the sorts of things you need in order to provide the appropriate support for overlays and transparencies.

Now, an argument could be made that you can sort of get rid of the update procedures, copy and paint, and merely substitute the validation procedure with whatever you want done. That's possible, and we really didn't go too far in investigating that. It probably required more work than we had time for.

Next, I'll discuss graphics context management. Early in the project, our early design considerations-- and this really predates X version 11 by at least a few months. We started out by desiring to have extensive drawing state. We anticipated the need for having multiple threads provided in the hardware.

At that time, we deemed it would probably be wise to bind the hardware state to the thread. And another thing we tried to do at that time, or anticipated we wanted to do, was to give direct access to the hardware, not only for the 3D component, as we mentioned, but also for the bitmap graphics or core component.

As X started to evolve or X11 started to evolve, we had a few expectations. And we tried to program or design for the flexibility to adapt to whatever X became. We noticed or observed that X10 was completely state free and also expected X11 to be much more state driven and probably quite a bit more complex.

We did expect, however, some reduced-state operators-- rather than fully state-driven operators, something closer to X10's state-free operator environment. I think that sort of indicates that we were looking for some compromise between X11 and X10 at that time.

Also, we anticipated that context editing was going to be a frequent operation and would generally be much cheaper than doing context switching.

Well, not all of those assumptions and considerations panned out too well. One of the areas where we didn't make a wise choice was binding together the state block and the thread of execution.

Primarily, state blocks were switched frequently and under application control. And it made sense to not have to switch threads of execution at the same time, which would mean doing a full execution context switch as well. So we adjusted the microcode to provide dynamic state block transitions in a single thread of execution.

We backed off, pretty much entirely, from direct access to the hardware for core operations. The reason was that we would essentially have had to implement DDX directly in the hardware, or something very akin to that, in order to do it. And since our hardware was a little bit lower level, we didn't have sufficient state to provide that kind of access.

We also had the opportunity to adjust the microcode to optimize the drawing state, drop out operators and state that weren't necessary for X. And we did optimize the context switching.

Next, I'll talk about the execution model. And originally, we had high expectations that we would be able to design a parallel execution model.

The goals involved with that were to reduce context switching for throughput operations, and to provide priority scheduling, giving interactive windows a higher priority and long-running rendering windows a background priority. Also, naturally, we wanted to be able to extend this design over a multiprocessor environment or configuration.

Shortly after we started that design, we ran into a few hitches, and we decided to step back and do a more thorough investigation of the X11 execution model. And a few things came to light that were quite surprising. Number one, it was the resource conflicts in the server that determined the amount of parallelism possible. Nothing else really was influencing that. The connection does not influence parallelism, other than that operations on a single connection have to perform as though they are serial.

The atomicity rule has a fairly important implication. You might say, well, if two requests are coming in on separate connections and they collide on a resource, who cares how badly they misbehave? The atomicity rule says that you cannot let them collide in such a way that they look like they collided in parallel. One must appear to have happened before the other.
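The scheduling rule that falls out of this can be stated in a few lines of C; representing the resources a request touches as a bitmask is a simplification for illustration.

```c
struct request {
    int           conn;       /* connection the request arrived on   */
    unsigned long touches;    /* bitmask of server resources it uses */
};

/* May request b start while request a is still executing? */
int may_run_in_parallel(const struct request *a, const struct request *b)
{
    if (a->conn == b->conn)
        return 0;             /* one connection must behave serially */
    if (a->touches & b->touches)
        return 0;             /* shared resource: atomicity forces an order */
    return 1;                 /* disjoint resources on distinct connections */
}
```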

One question that came up at the time was that, gee, graphics contexts have lots of resources in them, or can reference resources. And that turned out to be a non-issue, in the sense that those resources can be considered copy-on-reference, and you need not factor them into the resource conflicts.

When we summed up what we saw as being required, it turned out that we needed lightweight threads of execution and dynamic state block allocation. And the threads had to be something that could be created and destroyed very efficiently.

The result of this investigation was that we backed off to a serial execution model. The reason we did that is that the parallel execution model primarily required resource conflict management; it would impact the DDX and DIX partitioning; and it would probably require language support and system support to accomplish.

The fact that we backed off to a serial execution model aggravated the problems we had with context switching. We had no control over how many operations got executed at a particular time in a particular thread. Therefore, applications that did a lot of context switching performed badly, even though we had a lot of hardware assist.