Assignment 5 – Cassandra

Posted: March 19th, 2013 | Filed under: Assignment 5

Structuring Information for Peer-to-peer Learning

(a.k.a. swapping out your brain for someone else’s)

Models

Google’s mission statement is to organize the world’s information, but rendering knowledge in searchable form is only part of that problem. The other part is to provide structure to information. As Ashby framed it at the end of his 1956 book [1], intelligence amplification is mainly a “selection” problem. We are simply presented with far too much information to utilize. Some of this content is wrong or low quality. In contrast, high quality content is both factually correct and reusable, in the sense that one high quality idea can be used to beget many new secondary ideas. This type of high quality, high impact content is a model. In fact, the entire scientific enterprise attempts to take vast amounts of downstream observations and organize them in such a way that they can be explained by, or inferred from, a minimal number of upstream models. The quality of a model is judged both by its ability to explain current observations and by its ability to make predictions about the future. The structure provided by these upstream, higher-order models (and not the downstream observations) comprises intelligence. Some of Google’s information is indeed about these models, but it is indexed in the same fashion as all other information. It may be of interest to instead treat models as a separate and more powerful class of knowledge when indexing world knowledge.

Life Models

There are many classes of models. Scientific models are the subject of academic research. There are also implicit life models that each individual assumes. These individual models can arise from (1) conscious deliberation, or they might simply be (2) implicit habits picked up through life. Indeed, a considerable part of the variance in cognitive intelligence seems to arise from a person’s deliberate selection of a world model and from their daily habit patterns. These two types of models, conscious models and habitual defaults, may serve as vessels for transferring intelligence from person to person.

As a user scenario, imagine an application for a wearable like Glass that lets you “wear someone’s habits”. If you find a daily routine that works well for you, you can share it as an effective habit schedule for other people to try. Alternatively, you can try out the habit schedules of others by downloading them and having Glass prompt you with just-in-time situational alerts for executing a particular habit. This device would allow you to be more productive by emulating people you respect. Or it might let you “live the life of someone else” as an interesting costume change. Or it might serve as a therapeutic mechanism for breaking out of daily habit loops.

This type of device also has interesting implications for collective intelligence. By crowd-sourcing questions such as “what are the most effective patterns for daily life?” and “which belief frameworks are powerful?”, we can gather a haul of answers from the world’s population. Each person on the planet has either explicitly or implicitly reached a personal answer to these questions. Our only task is to render their answers into forms easily adoptable by others. The task can be phrased in a variety of ways. How can we share our individual brain structure with others? How can we share our world models with others? How can we share our patterns with others? Two different approaches are proposed in the next section.
Implementation

a) The Logical Approach

Perhaps the most obvious approach is to condense our individual philosophies into their purest form and express them explicitly. In this case, the technical challenge is to design simple descriptive languages [__] for models of this type. Interested parties may wish to swap out their current model for a new one, consciously and deliberately reprogramming themselves. Such a descriptive language for specifying world models will likely require both “what” and “why” components. The what-component can be modeled after rule-based logic systems [__] of the style “WHEN [EVENT A], DO [THING X]”. The why-component converts rules into reasons, and can be used to build hierarchical trees that further connect reasons to higher-order value systems.

In a continuation of the habit-based example above, some may want to adopt the morning routine of Mr. Rogers, the beloved host of the children’s show Mister Rogers’ Neighborhood. Mr. Rogers’ morning routine [4] can be summarized as “waking up at 5 a.m.; praying for a few hours for all of his friends and family; studying; writing, making calls and reaching out to every fan who took the time to write him; going for a morning swim; getting on a scale; then really starting his day”. The first few lines of a Mr. Rogers daily program could be written as:

  (what) WHEN 5AM, DO wake up. % (why) “Because time is precious”
  (what) WHEN DONE, DO pray for friends and family % (why) “Because I believe in God” AND “I love my friends and family”

Interchanging our models for those of others will cause us to swap out our routinized defaults for a new set of defaults, and will likely result in a cascade of effects. It may be particularly advantageous to replicate the models of people that you admire, for instance, an established researcher in your discipline. It may be fun to adopt the models of one of our ancestors, for instance, to live a day as your grandmother would and capture her behavioral and life lessons. It may be a literal way of experiencing “life in someone else’s shoes”, and of feeling more connected to either strangers or people that you know and care about.
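To make the what/why rule format above a little more concrete, here is a minimal, hypothetical rendering of the Mr. Rogers schedule in Python. The HabitRule structure, the trigger names, and the prompt formatting are placeholder choices for illustration, not a finished specification of the descriptive language:

    # A minimal, hypothetical sketch of the what/why rule language in Python.
    # The HabitRule structure and field names are illustrative placeholders.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class HabitRule:
        when: str        # "what" trigger: a time of day or a preceding event
        do: str          # "what" action to execute
        why: List[str]   # "why" reasons, linkable to higher-order values

    # The first few lines of the Mr. Rogers daily program from the text.
    mr_rogers_schedule = [
        HabitRule(when="5AM", do="wake up",
                  why=["Because time is precious"]),
        HabitRule(when="DONE", do="pray for friends and family",
                  why=["Because I believe in God", "I love my friends and family"]),
    ]

    def prompt(rule: HabitRule) -> str:
        """Render a rule as the just-in-time alert a wearable might display."""
        reasons = " AND ".join(f'"{r}"' for r in rule.why)
        return f"WHEN {rule.when}, DO {rule.do}. % {reasons}"

    for rule in mr_rogers_schedule:
        print(prompt(rule))

A shared habit schedule would then simply be a list of such rules that Glass (or any scheduler) steps through, firing each prompt when its trigger condition is met.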
b) The Chaos Approach

The alternative approach is to give up on rationalizing behavior and instead attempt to infer method from madness. Consider the following design of a file system intended for brainstorming. First, note that it is possible to infer the contents of someone’s brain from her computer file system, particularly when the computer is used for storing and organizing ideas (as in the case of a researcher). It is possible to organize the file system in such a way that it can be used for brainstorming, provided specific structural rules are followed. My personal file system is organized to facilitate brainstorming by having two searchable folders, ‘mentors’ and ‘me’. The ‘mentors’ folder contains plain text versions of the most inspiring research papers. The ‘me’ folder contains subfolders with plain text ideas of my own and plain text notes inspired by the content of presentations and meetings. The logic of this organization is inspired by two research papers with powerful ideas: “What Would They Think?” [2] and “Remembrance Agent” [3]. First, the idea of the Remembrance Agent is to look at what you are reading and writing and propose past content based on related keywords. Second, the idea of What Would They Think? is to show the affective reactions of respected mentors when reading new content.

We can combine the two ideas to address Ashby’s “selection problem” identified in the first section of this paper: that a large component of intelligence involves selecting the most useful information out of the large pool available. The proposed file system for brainstorming searches keywords against the ‘mentors’ and ‘me’ directories to bring up relevant material, as well as near-miss material that is less relevant but serves to bring in tangential ideas. The important idea here is not my file system itself, but that if multiple users organize their file systems in the manner described, the file system becomes an external representation of an individual’s mind. File systems then become interchangeable, in such a way that I can brainstorm with someone else’s personal ideas and mentor preferences. The file system brainstorm serves as an information dump of someone’s mind that has not been distilled into logical rules, but instead persists as the chaotic pile of quirks and nuances that embodies who we are.
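As a rough illustration of the search step, here is a minimal sketch of a keyword lookup over the ‘mentors’ and ‘me’ folders. The folder names come from the description above, but the scoring, the near-miss cutoff, and the function names are hypothetical, and a real system would use the Remembrance Agent’s continuous retrieval rather than a one-shot query:

    # Hypothetical sketch: keyword search over the 'mentors' and 'me' folders
    # of plain text notes, returning relevant hits plus near-miss tangents.
    from pathlib import Path

    def keyword_score(query, text):
        """Count how many distinct query keywords appear in a document."""
        words = {w.lower() for w in query.split()}
        doc = text.lower()
        return sum(1 for w in words if w in doc)

    def brainstorm(query, root=Path("."), near_miss_cutoff=1):
        """Return (relevant, near_miss) lists of (score, path) for a query."""
        relevant, near_miss = [], []
        for folder in ("mentors", "me"):
            for path in (root / folder).rglob("*.txt"):
                score = keyword_score(query, path.read_text(errors="ignore"))
                if score > near_miss_cutoff:
                    relevant.append((score, path))
                elif score == near_miss_cutoff:
                    near_miss.append((score, path))  # tangential, one-keyword hits
        relevant.sort(reverse=True)
        return relevant, near_miss

    hits, tangents = brainstorm("wearable habit models")
    for score, path in hits:
        print(score, path)

Swapping in someone else’s ‘mentors’ and ‘me’ folders is then all it takes to brainstorm against their mind instead of your own.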
Proposed Artifacts

To recap, the artifacts proposed within this paper were:
  1. A “wear someone else’s habits” wearable for REPROGRAMMING SELF WITH SOMEONE ELSE’S HABITS
  2. A Remembrance Agent + What Would They Think file system for BRAINSTORMING WITH SOMEONE ELSE’S MIND
Conclusion

Both proposed systems augment intelligence by facilitating the spread of powerful habits and ideas, giving us a framework for swapping out our minds for the preferred, more intelligent minds of others. The proposal is only for systems that provide additional structure and transparency to knowledge the world already has. Control remains firmly in the hands of each individual, who has suddenly been endowed with the capability of self-intelligence augmentation.

References
  1. Ashby, W. R. (1956). An introduction to cybernetics. Taylor & Francis.
  2. Liu, H., & Maes, P. (2004, January). What would they think?: a computational model of attitudes. In Proceedings of the 9th international conference on Intelligent user interfaces (pp. 38-45).
  3. Rhodes, B., & Starner, T. (1996, April). Remembrance Agent: A continuously running automated information retrieval system. In The Proceedings of The First International Conference on The Practical Application Of Intelligent Agents and Multi Agent Technology (pp. 487-495).
  4. Hattikudur, Mangesh. (2008). 15 reasons Mr. Rogers was the best neighbor ever. CNN. http://edition.cnn.com/2008/LIVING/wayoflife/07/28/mf.mrrogers.neighbor/index.html
   [Slides from class]

