
Rapid Prototyping

This page will be populated through June with bi-weekly posts about the projects I'll be completing in this HCDE rapid prototyping course.

E1 - Paper Prototyping
Design:

Our task was to (very quickly) design a tablet app that can be used to (very quickly) place an order at the HCDE Studio Cafe, then (very quickly) construct a paper prototype that can be used to (very quickly) test our design. Next we found someone who wanted to (very quickly) place a coffee order with the Studio Cafe barista, and we (very quickly) ran a small user test with them to (very quickly) evaluate our design.

 

 

Prototype:

To overcome the challenge of rapidly changing sticky notes that usually comes with paper prototypes, one of my group members came up with the ingenious idea to make an iPad cut-out that we could move across multiple screens. Each of the screens had a set of sticky notes that corresponded to the possible interactions for that screen. We chose to create it this way so that we weren't limited to a one-way scenario for our user tests. Our user really could interact with the device however they wanted.

 

Analysis:

During our first user test we realized that we had forgotten an important component: a cancel-drink interaction. Such a simple element, yet essential to the UI. This error showcased the importance and usefulness of paper prototypes. We were able to quickly create a button and begin our user test again. This time we found our design to be a success! I'm sure, however, that with additional tests we could have found other equally simple improvements to our overall design.

A1 - Wearable UI Paper Prototyping

Design:

I created a quick paper prototype of a smartwatch tool someone can use to locate their phone. The video to the left shows 3 interaction types:

  • Vibration

  • Melody

  • Flashlight

These 3 interactions can be turned on independently or simultaneously to help a user locate their phone. If they are somewhere private or don't want to make too much noise, they can activate the phone's vibration. They can also activate a melody, whose volume they can adjust. Finally, they can activate the flashlight on the phone, for times when melody or vibration aren't appropriate or aren't working.
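To make the interaction model concrete, here is a minimal sketch (not part of the paper prototype itself) of how the three alerts could be modeled as independently toggleable modes; the mode names and the sendToPhone stub are assumptions for illustration only.

```typescript
// Hypothetical sketch: the three "find my phone" alerts as independently
// toggleable modes. Any combination can be active at once.
type AlertMode = "vibration" | "melody" | "flashlight";

interface LocatorState {
  active: Set<AlertMode>;
  melodyVolume: number; // 0.0-1.0, only meaningful for the melody alert
}

const state: LocatorState = { active: new Set<AlertMode>(), melodyVolume: 0.5 };

// Toggling one mode never affects the others.
function toggle(mode: AlertMode): void {
  if (state.active.has(mode)) {
    state.active.delete(mode);
  } else {
    state.active.add(mode);
  }
  sendToPhone(state); // assumed watch-to-phone message, stubbed below
}

// Stub standing in for whatever the watch would send to the paired phone.
function sendToPhone(s: LocatorState): void {
  console.log("phone should now run:", Array.from(s.active), "melody volume:", s.melodyVolume);
}

// Example: vibrate and flash simultaneously, without the melody.
toggle("vibration");
toggle("flashlight");
```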

 

Never lose your phone again! 

 

Prototype:

Using paper and sticky notes I created an oversized watch device. Each of the possible interaction screens was put on its own sticky note that would be displayed to the user in the logical order. With each interaction (swiping gestures) I changed the sticky notes to reflect the screen change and Wizard-of-Oz'ed the reaction on the "connected" phone. I modeled the gestures on watching my initial user (a Moto 360 owner) use his device. I noticed the primary interaction styles were vertical and horizontal swipes and taps, so these were the interaction paradigms I chose to use.

 

Analysis:

With my first user, a smartwatch owner, I started the test with very few instructions; I wanted to see what he would do with it. I asked him to try to interact with the device, and to my pleasant surprise he horizontally swiped! When I presented the new screen he interacted with the icon by clicking on it. I was pleasantly surprised with the success of my first user test. My 2nd user test was done in class the following day, this time with a non-smartwatch owner. It was then that I realized how blatant my lack of any instruction or indication that there were multiple modes was. Through discussion after my 2nd test I realized that adding arrows on the sides, or small circles at the bottom, would have indicated to the user that there are additional screens/modes. Something as simple as this could encourage a swiping interaction.

 

Iteration:
To show what I took away from my peer feedback, I created the animated GIF to the left, which has circles at the bottom to indicate multiple screens or modes, as well as help text that could appear if the user hesitates on a screen or during the user's first few times using the app.
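As a rough illustration of that hesitation behavior, here is a minimal sketch, assuming a hypothetical 4-second threshold and showHint/hideHint helpers, of how help text could appear when the user pauses on a screen.

```typescript
// Hypothetical sketch: show help text if the user hesitates on a screen.
// The threshold and helper names are assumptions, not part of the prototype.
const HESITATION_MS = 4000;
let hintTimer: ReturnType<typeof setTimeout> | undefined;

function onScreenShown(screenId: string): void {
  // Start counting as soon as a new screen appears.
  hintTimer = setTimeout(() => showHint(screenId), HESITATION_MS);
}

function onUserInteraction(): void {
  // Any tap or swipe cancels the pending hint.
  if (hintTimer !== undefined) clearTimeout(hintTimer);
  hideHint();
}

function showHint(screenId: string): void {
  console.log(`Hint for ${screenId}: swipe left/right for more modes`);
}

function hideHint(): void {
  // Remove the hint overlay, if visible.
}
```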

E2 - Model Prototyping

Design:

We were tasked with creating a quick, low-fidelity prototype of an "EcoATM", an interactive kiosk for recycling used electronic devices like smartphones for cash or other value. We were to redesign one component with the goal of reducing the number of distinct steps and moving parts in the interaction sequence. We chose to reduce the number of interactions a user needed to complete the process.

 

Prototype:

We started with a cardboard box to simulate a tabletop version of our "ATM" machine. We then created a tall back where our buttons and touch display would go, and added a scanner area. In the scanner area we had one section for the user's ID and another for the phone. The area designated for the phone had adjacent cords, made with beads and pipe cleaners, for the user to plug in their device. This area also had a trap door; if the user chose to sell their device, the trap door would open, allowing the phone to be collected. The scanner door also had a window area so the user could see their ID and phone and know their items were safe. Our interactions were simple. Once the scan began, the machine would give the user a preliminary estimate of how much they would receive for their device. If the user didn't think the offer was worth completing, they could cancel their transaction and retrieve their items.

 

Analysis:

In our first usability test we discovered a few basic interactions we needed to fix: our scanner drawer didn't have a handle, so the user wasn't aware that it opened. Additionally, the instructions we had on the initial UI weren't clear. With a few really quick modifications our tool was much easier to understand. We added foam buttons to help our user realize we intended them to push the large buttons, and added a sequence of screen interactions so that the user would be presented with information as time passed. We also added the ability for the user to cancel the transaction mid-scan if they felt the estimated value wasn't worth their time. Our final prototype achieved our goal of reducing the number of interactions; the initial design had over 10 interactions and our prototype required around 3-5, depending on whether the user chose to sell their device.

A2 - Model Prototyping 

Design:

We were assigned to design a quick 3D lo-fi prototype demonstrating how a company might apply its core competency in new ways. We could choose from three different products to prototype; I chose a handheld immersion blender with a variable speed control and a digital display that senses when the contents have achieved a specific consistency.

Features include:

  • variable speed control – you decide the mechanism/interaction and whether it has fluid or distinct settings

  • digital readout of speed and content viscosity (monochromatic, non-touch sensitive)

  • can be used right- or left-handed

  • product dimensions: minimum of 12 inches long and 2.5 inches in diameter for the motor compartment/grip portion of the device

  • product weight between 1 and 1.5 pounds

 

Prototype:

I started with a whisk I had in my kitchen and built up the wand/handle area with paper and paper towel rolls, which is where the majority of my UI would be. I added a digital display for the viscosity readout with sticky notes, made an adjustable indicator to show the consistency, and added adjacent buttons to change these settings. When the consistency was reached, the digital display would notify the user and the device would beep. For the speed adjustments I used an empty toilet paper roll at the end of the handle to simulate a rotatable end, and added red arrows with (-) signs and green arrows with (+) signs indicating the direction to turn the end of the device.

 

Analysis:

From my user tests I was able to quickly modify my design and get instant user feedback. There were 2 design areas my user was unsatisfied with. The first was where I attempted to make a switch with a binder clip; in my first user test the user didn't think to use the binder clip as a switch and mentioned it confused him, so on my second iteration of the design I implemented a similar switch-like apparatus with a sticky note. The second piece of feedback was that the initial placement of the buttons and the digital display was in the way of where he wanted to place his hand. On my 2nd iteration I quickly moved them lower on the device and rotated them to a landscape orientation instead of their initial vertical placement. Finally, I got positive feedback on my speed dial. While not demonstrated in the video demo, the user told me he felt its location and labeling were intuitive and easy to use. He said it reminded him of a pepper grinder.

A3: 2D Object Prototype

Design:

The purpose of this assignment was to gain experience with designing and building a 2D object using a laser cutter. Our challenge was to build a stand that would firmly support a smartphone for the purpose of shooting a video. This design began with some online research of other phone stands. I found a few inspiring options. One was a 3-sided box with edges on the open face so the phone wouldn't fall; I chose against emulating something like this because you couldn't easily access the phone to start/pause video while it was in the stand, since the screen was hidden inside the box. The other designs I found were made with very little material and often had 2 pieces that hinged together for the phone to prop against. I worried designs like these wouldn't be suitable for the material we were using for this project. After a few sketches based on the aforementioned designs, I began to look around my house for objects that could double as phone stands. I found a shelf I have from IKEA; its "L" shape allowed for easy access to the display while not impeding the camera's view. Its shape seemed simple, and with a few modifications it would work perfectly for the task at hand. I chose to use this shelf as inspiration and began with a cardboard mockup.

 

Prototype:

I began by sketching out the pieces I'd need to cut. My initial plan was to create all the edges of the shelf and hinge all the pieces together into a hollowed 3D shelf. Unsure how I'd hinge all my pieces together, I decided to show my sketched ideas to a Mechanical Engineer. He advised me that I could create essentially the same item more simply. We then began sketching together, and he advised me on how to hinge the material so it could slip together and hold the weight of a phone. After we sketched the design I began modeling the artifact in Rhino, the 3D modeling software I'd use to cut the design into the material later on. Before cutting the design with the laser cutter I wanted to prototype in cardboard first. The cardboard allowed me to see that the proportions and hinges would work as I expected. This prototype worked exactly as I'd hoped, so it was off to the laser cutter!

 

 

Analysis:

The final cut item worked as I'd hoped: the phone could be placed in the stand in landscape or portrait, and the stand didn't impede the camera's view. It held the weight of the phone; all of the support pieces I'd cut made for a very sturdy and stable phone stand. In my first user test after cutting, the user mentioned that the stand wasn't adjustable, in that it doesn't allow the camera angle to change. When the phone was on the stand, the camera's view was good for shooting items straight ahead of it, but the phone couldn't focus on or capture things on the table. I realized that in my design I was focused on stability and designing an item that could hold the phone without impeding the camera or toppling over, but when I'd placed the phone on the stand I never checked what the phone's camera would see. If we'd been able to do another iteration on the design, I would have liked to make the tool reversible, so the phone could be placed on the shorter ledge, allowing the camera angle to change. Currently, if the phone is placed this way the stand falls. I could have added more weight or another piece that would have prevented the stand from falling under the weight of the phone. These changes would have added versatility and allowed the user to more easily record both things straight in front of the phone and things on the table.

 

A4: 3D Object Prototype

Design:

The purpose of this assignment was to develop our ability to create 3D models for prototyping physical objects at medium fidelity. We could choose to model whatever we wanted, but our design had to use at least one of the following primitive operations:

  • Extrusion

  • Revolution

  • Boolean (adding or subtracting one object from another)

My design utilized both extrusion and revolution. I used SolidWorks to make my model. A friend who was familiar with 3D printing and SolidWorks helped me with my design. He helped me maintain at least a 45-degree angle on some of the sharp corners so that the edges of my print wouldn't fall. The 3D printed model turned out very well; none of the edges fell, so I was very grateful for his advice.

 

E5: Video RAPID Prototype

Design:

This in-class assignment was to quickly create a video prototype for the One Bus Away mobile app that highlights the scenario and use cases for the application, not necessarily the interface itself. We had 90 minutes to sketch a storyboard for our video, shoot the video, edit it, and upload it to be shared with the class.

 

Prototype:

My team began by sketching a scenario; we decided to each sketch our favorite aspect of the app so we could consolidate our ideas and begin shooting the video. We decided to forgo any dialog because we didn't have time to write or rehearse it, so we removed all the sound and overlaid a simple song. Once we decided on the scenario we were going to use, we went out and began shooting. We took multiple takes of the same scenes so that we'd have options when we were ready to edit. Once we finished shooting, we put the clips in, added a few subtitles, and we were done!

 

 

Analysis:

We moved fast and were one of the few teams able to complete their video within the short time frame. The end product was simple and got the point across! The added subtitles did a great job reinforcing the point we were trying to convey with our prototype/product demo; without those subtitles a viewer may not have understood what we were trying to convey. We wanted to express that by using this application you don't have to wonder when the next bus is coming!

A5: Video Prototype

Design:

The subject of this assignment was to create a video prototype that clearly, effectively (and creatively) demonstrates the functionality of a product. I first wanted to explore a new form of transportation in the Seattle area, Car2Go; unfortunately, it takes about a week to establish an account, so that idea was scratched. I then chose to further our previous assignment's exploration of One Bus Away. The assignment was to think of this as a concise product pitch that you might use in seeking an investor for your design, which does not yet exist in more than concept or rough mockup. My favorite feature of One Bus Away is that you don't waste time sitting at the stop wondering when the bus is coming; instead you can catch a bus whenever you want and be confident that one is on its way. The mapping feature allows you to see the bus's route so you know where you're headed and when to get off!

 

Prototype:

I began by sketching 2 scenarios that I thought demonstrated my favorite aspect of the application. The first sketch has 3 simultaneous stories: one (Red) where someone is waiting... and waiting... and waiting at the stop; a second (Purple), the ideal use case, where someone is doing a task, their phone vibrates to remind them their bus is coming, they walk to the stop where they meet up with the first user (Red), and they board within moments, with no time wasted and no rush; and a third (Green) where someone wasn't paying attention, lost track of time, had to run to try to catch the bus, and ultimately missed it. I chose against this scenario mostly because I didn't have 3 individuals to use for my video. The second scenario is the one I chose to film. I had a few titles in mind that illustrated why I use and enjoy One Bus Away.
 

 

 

This app enables users to spend time where they want, with the people they want, doing what they want, instead of waiting at a bus stop. With that in mind, and with my sketched scenario, I took a bus trip into Seattle with my boyfriend (the aforementioned Mechanical Engineer and SolidWorks expert) and captured moments of our trip. We got off the bus near Pike Place and then enjoyed the day wandering around Seattle. We enjoyed capturing iconic views of the city and placing the camera in spots to record us together. When we were ready to head home, we searched along the route we took from our house to see which stop we were currently closest to. We didn't have to pay for parking, we didn't have to walk back to the car, and we spent time together, where we wanted!

 

Analysis:

This prototype was a lot of fun to shoot. I captured a lot of footage so that when we got home I'd have enough to edit. The captions fit in well with some of the scenes I shot, and we had a wonderful day in the city together! My boyfriend wasn't too familiar with the tool, so introducing it to him and showing him the app's capabilities was fun. He found the tool to be really useful and we've agreed to bus into the city more often. In addition, we're still waiting for our Car2Go membership card, and we're eager to explore our city using this form of transportation as well! We <3 Seattle!

A6: Wizard Of Oz (Behavioral) Prototyping

Design:

This assignment gave us the opportunity to try out Behavioral Prototyping, also playfully called "Wizard of Oz Prototyping." This technique can be very effective in testing design assumptions for HCI applications when the actual technology is either not available or too expensive to develop during the design phase of a project. We chose to make a wearable posture assistant that would help users learn a yoga position through audio cues. Our participant would be our Dorothy, the operator would be our Wizard, and our objective would be for our prototype to be so seamless and responsive that our "Dorothy" would think the "Wizard" was real!

 

Test Plan #1:

We booked 2 adjacent study rooms. We had a fanny pack that we were going to weigh down with a few weights to mimic an Arduino board and sensors. We were also going to place a cellphone in the pack to relay haptic and audio feedback to the user. We were going to use 3 laptops: two would Skype with each other so the Wizard could see Dorothy in the other room, and the third would be used to send the audio and haptic feedback to prompt the user. We also hooked a Kinect up to the computer to aid in the perception that the user's posture was being captured.

 

Test Plan #2:

When we were setting up for our first pilot test, the cellphone plan fell flat. The audio quality was very poor and the haptic feedback was far too delayed, so we decided to scrap the cellphone idea. We brainstormed and quickly decided to try again the next day with a Bluetooth audio speaker. (This plan worked!)

Prototype:

We built a website (shown in the slideshow) that the user would interact with to choose a workout, which initiated the system and enabled the Kinect to "track" the user's posture. Once the "system was initiated", I (the wizard) began acting as the system, playing audio instructions to walk the user through the exercise. The audio cues were pre-recorded with a computerized voice to increase the believability factor. We recorded many clips to play for the user: step-by-step instructions, affirmations, and adjustments. One of the instructional clips was intentionally designed to trick the user so that our system could correct them. During the test the user followed the prompts as they were given, chuckled as the system told her "Good job", and successfully completed the yoga pose. After the user test we asked her to evaluate the system. She stated that the system was well programmed and thought it was a nice alternative to having to watch a workout video. She did state, though, that because English wasn't her first language, some of the anatomical terms were a challenge for her to understand.
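For a sense of how the wizard side could be scripted, here is a minimal sketch (an assumption for illustration, not our actual setup) that maps keyboard shortcuts to the kinds of pre-recorded clips we used; the file names are placeholders.

```typescript
// Hypothetical sketch of the "wizard" control panel: keyboard shortcuts that
// play pre-recorded, computer-voiced clips. File names are placeholders only.
const clips: Record<string, string> = {
  "1": "audio/step-raise-arms.mp3",        // step-by-step instruction
  "2": "audio/step-turn-torso.mp3",        // the intentionally tricky step
  "3": "audio/adjust-bend-knee-more.mp3",  // correction / adjustment
  "g": "audio/good-job.mp3",               // affirmation
};

document.addEventListener("keydown", (event: KeyboardEvent) => {
  const src = clips[event.key];
  if (src !== undefined) {
    // Playing the clip immediately keeps the illusion of a responsive system.
    void new Audio(src).play();
  }
});
```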

 

Analysis:

Our test was a success: Dorothy believed our wizard prototype! When we revealed to her that I was operating the voice commands, she was shocked and amused! Our design goals were to see how detailed audio instructions needed to be for users to be able to complete and perfect workouts and postures. We also wanted to understand how willing users are to listen to directions given by a machine. Our tool was a success because we found that our user enjoyed the audio directions. In addition, she was able to fully complete the yoga position. The purpose of this kind of prototyping is to identify ways to improve a system before it proceeds to a costly development phase. From our user test we learned that the kinds of prompts given were clear and understandable, but the language used may have needed a few more alternatives. For example, one step was "Turn your torso towards the front", but our user didn't know the term torso, so she asked the system "What?" If the system were to be developed, a cue like this from the user should prompt an alternative instruction to help the user better understand the step. Identifying small things like this to adjust early in the design phase saves teams time and money.

A7: Website Prototype

Design:

We were assigned the task of redesigning the DUB website. The current site isn't well organized, and the seminars, one of the items the organization most wants to promote, are not easily found. Recordings of past seminars are even harder to find. We were asked to redesign the page with these key areas in mind:

  • Announcements: blog or other listing of news and announcements of interest to the community.

  • Directory: listing of faculty, students, affiliates, etc. who are members of the dub community.

  • Calendar: a calendar system for viewing and subscribing to a schedule of dub events.

  • Seminar: information about the weekly seminar series — schedule, presenters, abstracts, videos, etc.

  • Research: faculty research areas, projects, publications, collaborators, etc.

  • Membership: a members section for dub members (login required) to edit their own information on the site (profile, research, etc.).

Prototype:

I sketched ideas for the page design first to begin thinking about ways to lay out the aforementioned page highlights. I first decided I wanted a horizontal navigation bar at the top of the page. This would allow users to easily navigate to the key areas of the website. I also wanted to enable users to search through the projects/seminars/directory published within the DUB community. After sketching I began to build a lo-fi Axure prototype. Keeping it a low-fidelity, black-and-white prototype would allow me to show it to users and have them focus on the content and its organization, rather than having their critiques be about color or font choices.

 

AXURE LINK CLICK HERE

 

Analysis:

I got a lot of positive feedback from my users about the organization of the content on the page. They liked the "card" approach for the design of the blog posts and felt it would be a good way to give users a quick glance at posts to browse for ones that interest them. Additionally, they liked the search capabilities for the directory. While some users may want to browse through these sections, the users I asked agreed they'd likely go to these sections with a project or publisher in mind, in which case searching would best fit their needs. Some constructive feedback I got was that the home page felt a bit cluttered and that less may be more, but they did like that they could easily see the upcoming seminars! In a future design I would try to reduce the amount of information presented on the home screen.

A8: Mobile Prototype

Design:

We were tasked with designing a mobile application prototype! We could expand on a previous prototype we had done in the class or create our own! I love hiking and exploring the Pacific Northwest, so I wanted to design an easy tool that would suggest hikes to users. Users could input the duration they'd like to hike, the difficulty, and how far they'd like to venture from home. From there my app would suggest a hike/trail they could enjoy! I began with sketches of the kinds of interactions I wanted in the application. After that I began to design a high-fidelity UI for the application. For this exercise we were supposed to skip the lo-fi phase and jump right into hi-fi mockups. With that, I took the opportunity to practice visual design! I wanted to play with the trending frosted glass style that's so popular right now; I also thought this design aesthetic was fitting for the kind of application I was designing.
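To illustrate the suggestion logic behind the UI, here is a minimal sketch of the kind of filtering I had in mind; the Hike fields, sample trails, and thresholds are all assumptions for illustration, not data from the prototype.

```typescript
// Hypothetical sketch of the suggestion logic: filter trails by the user's
// desired duration, difficulty, and travel distance, closest first.
interface Hike {
  name: string;
  durationHours: number;
  difficulty: "easy" | "moderate" | "hard";
  milesFromHome: number;
}

interface Preferences {
  maxDurationHours: number;
  difficulty: Hike["difficulty"];
  maxMilesFromHome: number;
}

function suggestHikes(hikes: Hike[], prefs: Preferences): Hike[] {
  return hikes
    .filter(h =>
      h.durationHours <= prefs.maxDurationHours &&
      h.difficulty === prefs.difficulty &&
      h.milesFromHome <= prefs.maxMilesFromHome)
    // Closest trails first, since travel time matters most to the user.
    .sort((a, b) => a.milesFromHome - b.milesFromHome);
}

// Example with made-up trails: a short, easy hike within 30 miles.
const sample: Hike[] = [
  { name: "Twin Falls", durationHours: 2, difficulty: "easy", milesFromHome: 28 },
  { name: "Mailbox Peak", durationHours: 6, difficulty: "hard", milesFromHome: 34 },
];
console.log(suggestHikes(sample, { maxDurationHours: 3, difficulty: "easy", maxMilesFromHome: 30 }));
```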

Prototype:

I've made many Axure prototypes but enjoyed designing with a mobile device in mind. The icons and interaction points needed to be larger than in a web design, and the screen is much smaller, so real estate is more valuable! I enjoyed using both Illustrator and Axure to make this prototype. I designed the application in layers, which made it easy to make changes to the UI design as I built the prototype in Axure. You can visually design an entire UI, but it isn't until you interact with the Axure prototype that you remember the need for elements like a back button. :) Finding missing essential items like this early in the design phase is critical, and Axure prototypes allow you to do that!

AXURE LINK CLICK HERE

 

Analysis:

I got tons of feedback about the visual design. Users thought the application was beautifully designed. They also liked the flow of interactions and felt the tool would help them find a hike/trail near them that met their needs. They had questions, though, about what defined the difficulty levels. It may be helpful to add an information icon to elaborate on the details, or to let the user expand this choice to include elements beyond just elevation change, such as trail hazards. An additional thing a user suggested I add is scenic preferences: this user is an avid waterfall lover and would like to be able to find hikes where he could see new rapids/waterfalls. Lastly, users liked the amount of detail displayed for each of the suggested hikes, including the icons describing each hike.

A9: FINAL PROJECT

Collaborative HUB

An online space to help non-collocated teams feel like they're in the same cubicle.

 

Goal - To design a desirable space that helps separated team members collaborate efficiently and effectively. I want to find out whether the tool I'm designing would be desirable and whether the elements/features I've introduced would help users ease some of the challenges faced by geospatially diverse teams. My idea takes foundational elements from many existing collaborative platforms. I wanted to take the best from each of these spaces and design a prototype of a new system that achieves collaborative-space design requirements based on my literature research. From that research, I found it was important for users to be able to understand the "who/what/where" within the space, as demonstrated in a table from Miguel A. Teruel's "A Comparative of Goal-oriented Approaches to Modelling Requirements for Collaborative Systems."

Prototype: paper prototype & Axure lo/med fidelity clickable prototype

For my 419 Concepts and HCI course I've been doing extensive research on design requirements for successful online collaborative spaces. I want to make a prototype whose design decisions are grounded in the research I've done. I want to sketch and test an initial UI design to identify whether the concept, iconography, and layout make sense to users. I'll then iterate on this feedback and implement a lo/med fidelity clickable prototype using Axure. The system requirements would be to:

  • Allow users to easily and quickly understand the "who/what/where" within the design; these components were identified as essential for users' success in online collaboration (from reading by Teruel).

    • The "who" would be addressed through the use of an advanced avatar system that changes based on the user's participation in the HUB (from readings by Erickson & Schroeder).

    • The "what" will be addressed through giving the user the necessary contextual information, task information and socio-emotional information. This allows user's, regardless of their geospatial location, access to the same amount of context and face-face work relationships (from reading by: Aragon)

  • Additionally, it'll encourage cross-discipline collaboration: the hub would have a sophisticated search feature that allows users to collaborate and search via user-added tags, so users can use and build on existing knowledge (from reading by Lee); a sketch of this tag search follows the list.
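As referenced in the last requirement, here is a minimal sketch of how the user-added tag search could work over task cards; the TaskCard shape and the sample data are assumptions for illustration, not part of the prototype.

```typescript
// Hypothetical sketch of the hub's tag search: users attach free-form tags to
// task cards, and a query matches any card carrying all requested tags.
interface TaskCard {
  title: string;
  owner: string;      // the "who"
  tags: Set<string>;  // user-added tags, e.g. "visualization", "seminar"
}

function addTag(card: TaskCard, tag: string): void {
  card.tags.add(tag.toLowerCase());
}

function searchByTags(cards: TaskCard[], query: string[]): TaskCard[] {
  const wanted = query.map(t => t.toLowerCase());
  return cards.filter(card => wanted.every(tag => card.tags.has(tag)));
}

// Example with made-up cards: find work relevant to a "visualization" effort.
const cards: TaskCard[] = [
  { title: "Seminar poster", owner: "Ana", tags: new Set(["design"]) },
  { title: "Usage dashboard", owner: "Lee", tags: new Set(["visualization", "data"]) },
];
console.log(searchByTags(cards, ["visualization"]));
```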

I'll evaluate my design through user testing: at both the paper prototype and Axure phases I'll ask users to complete a set of tasks. Depending on the ease of completion of these tasks, I'll determine whether the design feature in question was a success or failure at the point of the user tests. If I feel a task wasn't completed with the ease I desired, I'll ask the user to elaborate on the difficulties of the task and to suggest alternative solutions. Some tasks I'd like to test for:

  • Have users establish which document certain users are currently using

  • Identify which users are active in the space vs. those who are dormant/offline

  • Identify a task card about a topic of interest

  • View an expanded card 

    • establish who is editing the file currently

    • where they’d view previous versions

    • add a topic tag to the card

  • Initiate a conversation with the team

 

AXURE LINK CLICK HERE

 

Analysis:

Paper Prototype Tests:

The paper prototype tests were very successful. In this phase I was looking to see what users thought of the overall layout of information. I asked them to interact with simple shapes and describe what they thought they would produce or what they would mean. I also asked high-level questions to tie back to the concepts I'd been reading about. For instance, because I couldn't demonstrate the idea of "live" avatars in either the paper or the Axure prototype, I asked questions about their thoughts on the avatars they currently interact with, how they'd improve those systems, and finally their thoughts on animated avatars. From my paper prototype tests I learned that the overall idea was well executed and that this kind of tool could be helpful for non-collocated teams; my users also said they felt the tool was usable, although the vague titles did cause some confusion.

 

Axure Prototype Tests:

The Axure prototype tests were also successful. I kept the prototype mostly black and white (low fidelity) because I'm still working to iron out the layout of information and the features to implement. I added titles and a scenario so that my users would have a sense of context before they began to use the tool. The only color I added was for the user icons. For the Axure prototype I walked users through a scenario to see how they'd interact with the tool. These tests didn't go as smoothly as I'd hoped, but users commented that their confusion was mostly with labels, or the lack thereof. The overall takeaway from the tests was that the idea itself was great. With my tool they could see who was in the collaborative space and what the content was about. They also felt that this platform could give them a better sense of who is in charge of which projects, and that the chat areas would make for easy and quick communication regardless of distance. Because of the prototype's low fidelity, it was hard for users to say whether they'd be encouraged to use the tool, but they said that upon further iteration their opinions might change.

 

To conclude - this prototype still has many areas of improvement. I'm going to continue to work on it and build an annotated wireframe that goes into additional detail about the rationale behind the design decisions I grounded in my research. Below you'll find the sources I referenced.

 

  1. Teruel, Miguel A., et al. "A Comparative of Goal-oriented Approaches to Modelling Requirements for Collaborative Systems." ENASE. 2011.

  2. Aragon, Cecilia R., et al. "A tale of two online communities: Fostering collaboration and creativity in scientists and children." Proceedings of the seventh ACM conference on Creativity and cognition. ACM, 2009.

  3. Schroeder, Ralph, and Ann-Sofie Axelsson, eds. Avatars at work and play: Collaboration and interaction in shared virtual environments. Vol. 34. Springer Science & Business Media, 2006.

  4. Lee, Jinhee, Carlos Perez-Rubio, and Ken Mohr. "Liquid Labs: Designing for Collaboration." Laboratory Design News. N.p., 9 Oct. 2014. Web. 07 May 2015. <http://www.labdesignnews.com/articles/2014/10/liquid-labs-designing-collaboration>.

  5. Seebacher, Noreen. "Discussion Point: Creating Long Distance Collaboration and Teams." CMSWire.com. Managing Your Digital Ecosystem, 11 Nov. 2014. Web. 08 May 2015. <http://www.cmswire.com/cms/social-business/discussion-point-creating-long-distance-collaboration-and-teams-027121.php>.

  6. Gartenstein, Devra. "Advantages & Disadvantages of Collaboration Between Businesses." Business & Entrepreneurship. Demand Media, n.d. Web. 07 May 2015. <http://yourbusiness.azcentral.com/advantages-disadvantages-collaboration-between-businesses-12644.html>.

  7. Erickson, Thomas, and Wendy A. Kellogg. "Social translucence: an approach to designing systems that support social processes." ACM transactions on computer-human interaction (TOCHI) 7.1 (2000): 59-83.

  8. Kraut, Robert E., and Resnick, Paul. Building Successful Online Communities : Evidence-Based Social Design. Cambridge, MA, USA: MIT Press, 2012. ProQuest ebrary. Web. 26 April 2015.


     
