Wednesday, November 4, 2009

Preparation for Week 13 - analyzing design activity

In two weeks an analysis of design activity will be due in class. Respond to this post with what type of data you might like to analyze to "try out" the ideas we've been discussing in class. I've included some information on how much data there is (you will only have a subset - so that this is feasible!).
  • Drawings of design professionals' representations of design (typically 1 page - 18 cases)
  • Debriefs in response to a model of design (24 freshmen, 26 seniors, 18 professionals) (typically 2-5 pages)
  • Think alouds of individuals designing a playground (24 freshmen, 26 seniors, 18 professionals) (typically 15-25 pages, typically 1-3 hours of activity)
  • Videotapes (with transcripts) of 2 engineering design team meetings (7 professionals) (1.5 hour meetings, transcripts around 20 pages)
  • Videotapes (with transcripts) of 2 architect design team meetings (architect and 2 clients) (1.5 hour meetings, transcripts around 20 pages)
  • Design reports - debriefs on "lessons learned" from engineering freshmen design teams (18 teams, typically 1-2 pages)
  • "Cases" of design processes in industry from the Design Council
The data will be posted on Blackboard. I'll also try to put folks in teams so you can talk through what you observe and why.

    17 comments:

    mpollock said...

    ENE Thursday Seminar 11/5- Social Justice



    Our guest speaker for the ENE Thursday Seminar, Caroline Baillie from the University of Western Australia (UWA), spoke about her work with cross-disciplinary design teams for social justice. This is the website for their project... Check it out.



    http://wasteforlife.org/

    Robin may be able to provide more information about Dr. Baillie's work.

    Unknown said...

    I would like to try out the "Think alouds of individuals designing a playground (24 freshmen, 26 seniors, 18 professionals) (typically 15-25 pages, typically 1-3 hours of activity)" data.

    Unknown said...

    Fish and Scrivener got me thinking about how sketches are personalized by designers. Each designer has their own denotation/decryption techniques, which ease recognition and trigger memory. For instance, I tend to make up and use acronyms and other shorthand symbols in my sketching. So what does this denotation technique say about my designs, or about how others do their designs? Are there certain sketching strategies that lead to improved designs, or is the success of sketching solely dependent upon experience and environment? Also, as Goldschmidt hinted at, how do sketching strategies differ by profession? I would imagine that engineers use far less sketching than architects; instead of using sketching as a brainstorming activity, I would think engineers use sketching more to explain a concept. One last random thought: are these studies also considering the body language used simultaneously with the sketches (e.g., pointing and hand gestures used to explain a drawing)?

    Robin said...

    Well - you don't want to hear this (ha ha) - but Visser does research on gestures in design practice - some interesting stuff :)

    tforin said...

    I'll take the design reports of freshmen's lessons learned.

    Laura said...

    One thing I found interesting in the Fish/Scrivener article was the comparison of descriptive vs. depictive sketches. Based on their description, I originally thought of sketches as being only depictive representations: drawings that represent something physical, so that details can be extracted just by looking at them. But reading about their idea of descriptive representation made me really expand my definition of what sketching is. If a sketch is also a descriptive representation, it has elements that can be interpreted in different ways or that only make sense by being defined. This made me think of sketches as possibly being a representation of a process, or of a physical object and a process. I suppose when we have discussed sketching for design in the past, this is actually what was meant, but it took me reading these descriptions of different representations to really consider what it meant.

    Bethany Fralick said...

    Re: Kevin
    I want to respond to Kevin's question: Are there certain sketching strategies that lead to improved designs, or is the success of sketching solely dependent upon experience and environment?

    I do not think certain sketching strategies lead to improved designs; I think the simple use of sketching improves designs. Sketching is one of those tools that engineering provides. It is funny that you mention experience. I think experiences definitely contribute to your particular style and the different ways you sketch. This could increase the detail or information you include in your sketch, thereby making it more valuable.

    Anne said...

    I might be interested in the Design Council stuff, but the hyperlink is broken, so I'll try to search it out later.

    Otherwise, I'll go with "Think alouds of individuals designing a playground (24 freshmen, 26 seniors, 18 professionals) (typically 15-25 pages, typically 1-3 hours of activity)"

    ... but I'm honestly not sure.

    Andrew O said...

    I think that I would like to look at the design reports from first-year engineering student teams.

    mpollock said...

    I'd like to look at: "Cases" of design processes in industry from the Design Council.

    Bethany Fralick said...

    I would like to look at the data on the design professionals' drawings.

    Unknown said...

    Anne,

    If you are still analyzing the think-alouds, then I suggest we both look at Isaac, George, and Barbara. They had the minimum, average, and maximum quality scores respectively.

    Unknown said...

    Good choices, Kevin and Anne :)

    Unknown said...

    Thoughts on Ryan and Bernard plus my experience analyzing data:

    - I like the idea of using teams to go through the data to help mitigate researcher bias; however, this may bring about a different bias if not done properly. For instance, I have noticed that when discussing my PhD project, I get completely different advice depending on whether I meet with my committee as a group or individually. In the group setting, I have found that the group will usually reach consensus on what the “Godfather of Civil Engineering” says, leading again to researcher bias. Individually, however, I get different perspectives, which is far more helpful to me.

    - Knowledge of the solution space (either from expertise or from experience administering the problem) seems essential to me when doing think-aloud studies. With a thoroughly mapped solution space, it would be far easier to pick up on ‘missing information’ when analyzing quantitative data.

    - I had a lot of the same questions as the “Further Research” section in Ryan and Bernard. Has any progress been made on these research questions?

    - With so much uncertainty in qualitative analysis, has anyone applied sensitivity/risk analyses? For example, if a researcher has a hard time distinguishing between two design steps in a think-aloud transcript, do they ever go back to check how their conclusions would change if they had picked the other design step? If a researcher is certain that a paragraph refers to one of two things, and the selection of either can be shown to be negligible with regard to the conclusion reached, wouldn't this add further validation to the study by showing that the uncertainty can simply be accepted? Or, taking the other case, wouldn't it be interesting to report a range of conclusions depending on the decisions made? (There is a rough sketch of what I mean below.) For example, for my Masters I analyzed this very problem when programming construction projects based on uncertain travel demand forecasting models. I basically said: we are unsure of our model inputs but are sure that they vary within this range; this leads to a range of outputs, which yields a portfolio of projects to consider. That is an improvement over picking projects based on a single set of uncertain inputs.
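
    To make that concrete, here is a minimal sketch (in Python, with made-up segments, code names, and data structure, none of which come from the course data) of how such a sensitivity check on ambiguous coding decisions might work: enumerate every way the ambiguous segments could be resolved and report the range of code counts that results.

        from collections import Counter
        from itertools import product

        # Hypothetical coded think-aloud transcript: each segment has a chosen
        # design-step code, and ambiguous segments also record the plausible
        # alternative the coder almost picked.
        segments = [
            {"code": "problem scoping"},
            {"code": "idea generation", "alternative": "problem scoping"},  # ambiguous
            {"code": "idea generation"},
            {"code": "evaluation", "alternative": "idea generation"},       # ambiguous
            {"code": "evaluation"},
        ]

        # Each segment contributes either its chosen code or, if ambiguous, one of two.
        options = [(s["code"], s["alternative"]) if "alternative" in s else (s["code"],)
                   for s in segments]

        # Enumerate every way of resolving the ambiguous segments and tally the codes.
        all_tallies = [Counter(resolution) for resolution in product(*options)]

        # Report the range each code's count can take across all resolutions; tight
        # ranges mean the conclusions are insensitive to the ambiguous decisions.
        for code in sorted({code for option in options for code in option}):
            counts = [tally[code] for tally in all_tallies]
            print(f"{code}: {min(counts)} to {max(counts)} occurrences")

    If the ranges stay tight, the analyst can report a single conclusion and note that the ambiguous segments do not affect it; if they are wide, reporting the range itself is the honest answer.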

    Unknown said...

    Just as with any kind of research - in qualitative research the researcher has to build a case for what they observe. Creating a coding scheme and having multiple coders code the same data is one way to characterize the extent to which the coding scheme is reliable (e.g., calculate inter-rater reliability = extent to which people would code data in the same way). Situating findings in prior research is another key way (this establishes relevance of the findings in a broader theory). Another key issue is establishing a chain of evidence - multiple accounts within the data set that illustrate that the findings are a consistent theme in the data and may be interpreted in a particular way. In other words, it's a relatively complicated process of finding evidence in the data and establishing the relevance and logic in a broader set of formalized ideas (other empirical work).
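
    As a rough illustration of that reliability check (not from the course materials; the coders, codes, and labels below are made up), two coders' labels for the same segments can be compared with percent agreement and a chance-corrected statistic such as Cohen's kappa:

        from collections import Counter

        def percent_agreement(codes_a, codes_b):
            """Fraction of segments the two coders labeled identically."""
            matches = sum(a == b for a, b in zip(codes_a, codes_b))
            return matches / len(codes_a)

        def cohens_kappa(codes_a, codes_b):
            """Cohen's kappa: observed agreement corrected for chance agreement."""
            n = len(codes_a)
            observed = percent_agreement(codes_a, codes_b)
            freq_a, freq_b = Counter(codes_a), Counter(codes_b)
            # Chance agreement: probability both coders assign the same code by
            # accident, given each coder's own marginal code frequencies.
            expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                           for c in set(codes_a) | set(codes_b))
            return (observed - expected) / (1 - expected)

        # Made-up labels from two coders applying the same scheme to ten segments.
        coder_1 = ["gather", "idea", "idea", "analysis", "idea",
                   "analysis", "gather", "idea", "analysis", "gather"]
        coder_2 = ["gather", "idea", "analysis", "analysis", "idea",
                   "analysis", "idea", "idea", "analysis", "gather"]

        print(f"Percent agreement: {percent_agreement(coder_1, coder_2):.2f}")
        print(f"Cohen's kappa:     {cohens_kappa(coder_1, coder_2):.2f}")

    Kappa comes out lower than raw agreement because some agreement is expected by chance alone; disagreements flagged this way are typically discussed and used to refine the coding scheme.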

    Again - push on this idea of researcher bias - and think of it more as researcher's interpretation BACKED up by evidence.

    The risk sensitivity analysis actually sounds like qualitative research - where a goal is to describe the space of issues that play a role rather than make claims about complex causal relationships.

    Anne said...

    Sorry Kevin, I'm doing the Design Council.

    I'm looking at Starbucks with respect to how they adapt their product to a global market.

    Laura said...

    Something I found interesting in my analysis of design data (the analysis of the professionals' sketches of what design is) was connecting what the professionals thought were the most/least important phrases about design with their sketches. Often, the phrases people chose as most important about design weren't really represented in their sketches - and sometimes the phrases they chose as least important were included. Very few people were completely consistent between their most/least important phrases and steps and their representations of design.