Amy Chard

BSc(Hons) student in 2010.

Project: S.M.L.XL

Supervisors: JamesNoble and StuartMarshall


Multi-touch screen devices are now available in a large range of resolutions and form factors: from the Small (iPhone, iPod Touch), through the Medium (multi-touch pads on laptops) and the Large (desktops with multi-touch pads or touch mice; multi-touch tables or screens), to the Xtra-Large (wonderwall Optiportal with gesture recognisers). All of these devices run their own UI software, have their own input gestures, and cannot easily share information. Small and Medium devices are private and single-user; Large and Xtra-Large devices are public and can have multiple users. This project will design and build a single environment for all these devices, from the Small to the Xtra-Large, that supports common user interface gestures and allows information to be freely moved between different displays and different users (you know, like in Avatar).

Third Week of Second Trimester

26th July - 1st August

I have an Objective-C client which can connect to and communicate with the Java server, allowing images to be moved in a running desktop client via the iPhone and vice versa. Images can be resized and relocated. I have also just added functionality for text boxes to the Objective-C version, although it isn't fully functional yet (at least not to the same degree of functionality that the Java client has). This week I am working on the ability to transfer files to and from the server. The Objective-C client implements the Global View design rather than the Neighbour View approach.
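A rough sketch of the idea behind the Global View: each client owns a viewport rectangle inside one shared coordinate space, so an image's position is stored once, globally, and each client just translates it into its own local coordinates. The names here (Viewport, toLocal, toGlobal) are made up for illustration, not the actual project code.

```java
import java.awt.Point;
import java.awt.Rectangle;

// Illustrative sketch of the Global View model: one shared coordinate
// space, with each client screen claiming a rectangular region of it.
public class Viewport {
    private final Rectangle bounds; // this client's region of the global space

    public Viewport(Rectangle bounds) {
        this.bounds = bounds;
    }

    /** Is this global point visible on this client's screen? */
    public boolean contains(Point global) {
        return bounds.contains(global);
    }

    /** Translate a global coordinate into this client's local coordinate. */
    public Point toLocal(Point global) {
        return new Point(global.x - bounds.x, global.y - bounds.y);
    }

    /** Translate a local coordinate back into the shared global space. */
    public Point toGlobal(Point local) {
        return new Point(local.x + bounds.x, local.y + bounds.y);
    }
}
```

With this model the server only ever stores global positions, and a client panning through the global view just changes its own bounds rectangle.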

Week 7

25th - 2nd May

Both this week and last week I've been working on the implementation. I have created a server and client; the clients can seamlessly move a single picture between each other. Currently the screens are set up in relation to each other: a screen is either to the right or left of the current screen. There is an ongoing discussion as to whether the model should be a bunch of screens connected in relation to each other or whether there should be one global view that each of the screens can move through. My aim is to implement this other model as well as the model I currently have, to determine which is best to proceed with.
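A rough sketch of the screens-in-relation model (the Neighbour View): each screen only knows its left and right neighbours, and when a picture is dragged past an edge it is handed to the adjacent screen with its x coordinate remapped. The names (Screen, ownerOf, remap) are illustrative, not the project's actual code.

```java
// Illustrative sketch of the Neighbour View model: screens linked
// left/right, with pictures handed off across shared edges.
public class Screen {
    public final String name;
    public final int width;       // screen width in pixels
    public Screen left, right;    // adjacent screens, may be null

    public Screen(String name, int width) {
        this.name = name;
        this.width = width;
    }

    /** Which screen should own a picture at local coordinate x?
     *  Hands off to a neighbour when x falls outside this screen. */
    public Screen ownerOf(int x) {
        if (x < 0 && left != null) return left;
        if (x >= width && right != null) return right;
        return this;
    }

    /** Remap a local x coordinate onto the owning screen's own axis. */
    public int remap(int x) {
        if (x < 0 && left != null) return x + left.width;
        if (x >= width && right != null) return x - width;
        return x;
    }
}
```

The contrast with the global-view model is that here there is no shared coordinate space at all; positions only make sense relative to some screen, and moving a picture means explicitly handing it from one screen to the next.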

First week of the holidays:

April 3rd - April 10th

I did some research, but I didn't find much that was exciting.

I started work on coding. I went with what James said and started out by trying to do it in Java. So far I've set up a server that sits there and waits for clients to connect to it. Once a client connects, the server sends a file (a JPG at the moment) and that client displays the picture in a GUI that allows the user to move it around. I haven't yet made the clients coordinate via the server; at the moment they each display the picture individually, unaware of what the others are doing, although I have written some code as a start toward doing that.
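A minimal sketch of that setup, assuming plain TCP sockets: the server reads the JPG once, then sends each connecting client a length-prefixed copy of the bytes. The port number (9000) and filename (picture.jpg) are placeholders, not the values the project actually uses.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative sketch: a server that waits for clients and sends each
// one the bytes of a JPG, length-prefixed so the client knows when the
// image ends and can decode it.
public class ImageServer {

    /** Write one length-prefixed image to a connected client. */
    public static void send(OutputStream os, byte[] image) throws IOException {
        DataOutputStream out = new DataOutputStream(os);
        out.writeInt(image.length); // 4-byte length prefix
        out.write(image);           // then the raw JPG bytes
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        byte[] image = Files.readAllBytes(Paths.get("picture.jpg"));
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) { // accept clients one at a time, forever
                try (Socket client = server.accept()) {
                    send(client.getOutputStream(), image);
                }
            }
        }
    }
}
```

The next step the entry mentions, coordinating the clients, would mean the server also relaying each client's move events back out to all the others instead of just sending the image once.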