Tuesday, September 30, 2008

Reputation Systems for Self-Organized Networks

 


SONJA BUCHEGGER, JOCHEN MUNDINGER, AND JEAN-YVES LE BOUDEC



Self-organized communication systems such as mobile ad-hoc, Internet-based peer-to-peer and wireless mesh networks have received increasing attention, in terms of both deployment and research. 
  They are typically organized according to the peer-to-peer (P2P) organization principle. That is, participants in the system are equals in that they have equivalent capabilities and responsibilities – they are peers. Such P2P systems can also be found in a variety of other networks, for example social or biological networks. Thus it is not surprising that there is a wealth of problems in these communications systems that are also of interest in other disciplines.
  In the novel Fourth Generation (4G) paradigm as well, there are self-organized components such as ad-hoc connectivity among spatially close nodes. 
  One of the major issues in such self-organized communication systems is that of cooperation. Typically, users are concerned primarily about their own benefits and thus cooperation and fairness cannot be guaranteed. This selfish behavior is called free-riding and is a well-known phenomenon in economics. The free-rider problem is serious enough that the service might not be provided at all or may be provided without sufficient quality of service [15]. Effects can be detrimental as shown, for example, in Internet-based P2P networks [6].
Although altruistic behavior has been observed, it is not clear to what extent this will help in communication systems. Altruism is the practice of being helpful to other people with little or no interest in being rewarded for one’s efforts [1], [8], [14].
  Incentive mechanisms (pricing mechanisms as well as rules) and artificial immune systems [16] have been proposed and investigated to address the issue of cooperation in communication systems. 
  Two other problems often incurred in P2P networks are malicious attacks and random failures. Reputation systems [5] address both these issues as well as incentive problems. Here, users keep track of their peers’ behavior and exchange this information with others in order to compute a reputation value about their peers. Reputation values are then used to decide with whom to cooperate and which nodes to avoid, i.e., users with a good reputation are then favored [4]. 
  Reputation systems have already proven useful and are popular in online auctioning systems such as eBay [17] or online book stores such as Amazon. However, unlike self-organized communication systems, those have a centralized component. 
  We next look at reputation systems in more detail, then turn to fundamental features that a reputation system should have, as well as fundamental questions that need to be answered.

Terminology and Classification of Reputation Systems
Reputation systems have been studied and applied almost separately in diverse disciplines such as economics, computer science and social science, resulting in effort duplication and inconsistent terminology. Even within computer science, research activities have not been very consistent, and have almost evolved separately in the artificial intelligence, Internet-based P2P, and Mobile Ad-Hoc Networks communities. In fact, there is not even a consistent definition of reputation itself, nor, closely linked, of trust. 
  As a convention, following the Oxford English Dictionary, we shall adopt that reputation is an estimate of a person's actual quality. Person is the appropriate term for social networks; in the context of computer networks we shall replace "person" with user (of the system), node (in the network), or simply peer. Similarly, quality refers to the behavior that is of interest in a given context. For 4G, it might refer to packet forwarding behavior. 
  Moreover, we shall use the term rater reputation to refer to the reporting behavior within the reputation system. This differs from our earlier definitions [4], where we referred to rater reputation as trust. Although, again, the Oxford English Dictionary definition of trust fits our previous usage ("Confidence in or reliance on some quality or attribute of a person or thing, or the truth of a statement"), the term trust has been used in several different ways in the literature, including synonymously with reputation, and has therefore become too ambiguous for our purposes. 
 As opposed to reputation, rater reputation values are based on compatibility and thus indicate agreement.
 As for classification, there have been various surveys in different computer science communities using partly overlapping criteria. However, most of the surveys are relevant in all the other communities, too. For further information, the reader is referred to [10]. 
In this article, we focus on fundamental features and questions that concern reputation systems in general, independently of the application domain and only mention specifics for self-organized networks when they deviate from generally applicable aspects of reputation systems.

Features That a Reputation System Should Have
The basic premise of a reputation system is that one can predict future behavior by looking at past behavior. This does not hold for all cases, since there can be erratic behavior that is completely inconsistent with past behavior, as in the case of sudden failure. But the assumption is that such cases are the exception and not the norm and that past behavior can be used as a basis for the prediction of future behavior.
  To provide this basis, the reputation system has to keep track of past behavior. This can be done in several ways. Here are some decision points to guide the design process of a reputation system:

  What information is kept? About whom? Where? For how long? When is information added? How is information from others considered? How is it integrated? What does this information look like over time? What has to happen to change this information?

Fig. 1. Reputation system flow.

  In summary, a reputation system needs a way of keeping information about the entity of interest, of updating it, and of incorporating the information about that entity obtained from others. This provides the basis for decision making. Then the decision making itself has to take place to allow nodes to choose other nodes for cooperation.
  Humans can look at graphical representations of reputation, such as the number and color of stars on eBay, and glance over qualitative information given in feedback comments. In self-organized networks, we want the reputation system not only to present information about reputation to the user, but also to make automatic and autonomous decisions. The reputation system therefore has to have a mechanism for making decisions and classifications. In this article, however, we concern ourselves mainly with the reputation information itself. 
As time passes, the importance of parts of the reputation data collected can change. For instance, recent steady behavior is probably a better predictor of future behavior than behavior observed a long time ago. On the other hand, looking only at the most recent behavior can yield a distorted picture of past behavior as one instance observed is not enough to measure a trend. Reputation systems need to have a way of factoring in time in a reasonable manner that would either conform to the user’s expectation or be proven to work well in the system environment.
Fig. 1 shows the workings of an idealized reputation system. Several sources contribute to the generation of reputation values: direct (own) observations, indirect information from others, and time passing. Once the reputation value is determined, the subjects of interest can be classified and reactions according to these classifications can be triggered. 

Keeping Track of Past Behavior
Reputation is a function of past behavior and time, so a reputation system needs to collect data about past behavior. These data can be stored in a centralized or in a distributed way. For self-organized networks, a distributed storage of reputation data is needed, as there is no infrastructure in place to reliably ensure access to a centralized reputation authority.
  The reputation system has to offer a way of collecting information about the entities of interest. In a self-organized network these entities of interest are neighboring nodes and nodes that are on any communication path to other nodes. Nodes can join and leave self-organized networks. There is a tradeoff between performance and overhead of reputation systems in terms of which entities to keep track of, as nodes may be short-lived in a network and every entry entails both storage and potential maintenance cost. 
  In addition to the decision about which nodes to keep track of, and the feasibility of doing so in a particular instance of a self-organized network, there is the question of which data to collect about the behavior of an entity of interest. Assuming binary behavior (i.e., behavior is either good or bad, cooperative or defecting), the basic choice is between keeping track of the volume of good/bad behavior, the ratio of good to bad behavior, or both, if the chosen algorithm for calculating a reputation value can take advantage of it. 
  A good reputation should be a reflection of good behavior. Just what constitutes good behavior needs to be clearly defined in order to determine how reputation should be calculated given data about behavior. Using the ratio of good to bad behavior, for instance, reflects the willingness to cooperate only in relation to a specific extent of demand or opportunity for cooperation. This extent (say, the number of cooperation requests) is lost in the ratio and therefore unknown; the explanatory power of a cooperation ratio is thus limited. Conversely, if the absolute number of cooperation instances (or misbehavior, respectively) is taken as the basis for reputation calculation, it is not known out of how many opportunities the behavior was good or bad. A combination of ratio and volume captures willingness to cooperate in relation to opportunity.
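  As a concrete illustration, a per-peer record along the following lines could keep both the volume and the ratio available to whichever reputation algorithm is used. This is a minimal Python sketch; the names and the neutral default of 0.5 are our own assumptions, not part of any particular system.

    from dataclasses import dataclass

    @dataclass
    class BehaviorRecord:
        """Per-peer tally of observed binary behavior."""
        good: int = 0  # cooperative interactions observed
        bad: int = 0   # defections observed

        def observe(self, cooperated: bool) -> None:
            if cooperated:
                self.good += 1
            else:
                self.bad += 1

        @property
        def opportunities(self) -> int:
            # the volume: total demand for cooperation seen so far
            return self.good + self.bad

        @property
        def ratio(self) -> float:
            # willingness to cooperate relative to opportunity;
            # neutral (0.5, an arbitrary choice) with no observations
            return self.good / self.opportunities if self.opportunities else 0.5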

Incorporating Data from Different Sources
There is a tradeoff between speed and accuracy. The more second-hand information is used, the faster an estimate of some subject's behavior can be obtained; however, the more vulnerable that estimate also is to liars. In order to be useful, reputation values need to be accurate, at least to some degree, and this accuracy has to be assessed. 
  Whereas direct observations should always be accepted, second-hand information should be accepted only if considered likely, i.e., only if it does not differ from the user's current reputation value by more than is acceptable (e.g., as measured by a threshold Δ or another suitable metric). This behavior is comparable to the concept of confirmation bias in the social sciences, and can also be motivated by observations in everyday life. Even when a report is accepted, one might want to weight it by a factor ω.
  Confirmation bias is an example of faulty human reasoning: discarding information (facts) that does not fit a theory and favoring confirming information. The use of such a fallacy might seem counterintuitive. Besides the motivation of excluding spurious information rather than protecting an already formed opinion, there are differences in how we employ confirmation bias that allow us to use it as a rational tool. First, the node's own information, i.e., its own observations, is not subject to confirmation bias; all direct information is incorporated as-is. Second, the observed behavior is binary (cooperate/defect) and observed with high certainty, hence trust in the node's own judgment is justified. These two points correspond to undeniable facts in human confirmation bias: when facts are undeniable, they are included despite the bias toward accepting only facts that confirm one's belief. Third, the confirmation bias is applied only to decide whether to accept third-party indirect observations, and is thereby a measure of compatibility with a view of reality that has a high probability of accuracy. This rests on the assumption that behavior is constant over different interaction partners, and can be compared to collaborative filtering.
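  To make the mechanism concrete, the deviation test with subsequent weighting might look as follows; this is a sketch, and the particular values of the threshold and weight are illustrative assumptions, not prescriptions from the analysis.

    def merge_second_hand(own: float, reported: float,
                          delta: float = 0.2, w: float = 0.1) -> float:
        """Fold in a second-hand report only if it passes the deviation test.

        own      -- the node's current reputation value for the subject
        reported -- the reputation value claimed by a third party
        delta    -- acceptable deviation (the threshold from the text)
        w        -- weight given to accepted second-hand information
        """
        if abs(reported - own) <= delta:  # deviation test: compatible views only
            return (1 - w) * own + w * reported
        return own                        # incompatible report is discarded

  For example, against an own value of 0.3 with delta = 0.2, a report of 0.9 is discarded, while a report of 0.4 nudges the value to 0.31.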

Forgetting Reputation Over Time
Taking into account the passing of time in a reputation system allows for two features: emphasizing the importance of behavior at one time over another (e.g., of recent behavior over behavior observed long ago), and providing the possibility of revising the action toward a node that was triggered by a particular reputation value (e.g., redemption of a node after it has been repaired). We suggest including discounting with a factor ρ, so that old observations gradually become less important. This is a form of forgetting.
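A minimal sketch of such discounting, assuming a simple exponential scheme (both the scheme and the value of ρ are design choices, not mandated by the analysis):

    def age_record(good: float, bad: float, rho: float = 0.9) -> tuple:
        """Fade both behavior tallies by a factor rho per time period, so old
        observations gradually lose influence. After n idle periods an
        observation retains weight rho**n; e.g., 0.9**20 is about 0.12, so a
        20-period-old defection counts for little."""
        return good * rho, bad * rho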

Secondary Response
To prevent such forgetting from backfiring, a mechanism such as secondary response can be introduced, which provides increased sensitivity to misbehavior by nodes that have been deemed misbehaving in the past. With this increased sensitivity, appropriate action can be triggered faster than in the regular case.
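Sketched under the assumption of reputation values in [0, 1] and an explicit cooperation threshold (our framing, for illustration), a secondary response can be as simple as raising the bar for known past offenders:

    def cooperation_bar(base: float, was_misbehaving: bool,
                        penalty: float = 0.15) -> float:
        """Return the reputation a node must exceed to be cooperated with.
        Previously misbehaving nodes face a stricter bar, so renewed
        misbehavior triggers a reaction faster (increased sensitivity)."""
        return min(1.0, base + penalty) if was_misbehaving else base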

Questions for Reputation Systems
There are some fundamental questions regarding effectiveness and robustness that need to be addressed for reputation systems in all communities.

• What impact can potential liars expect to have on the reputation value of the subject in question?
• Which kind of information should be passed on to other nodes to achieve an accurate reputation system? Compare two scenarios for second-hand information: in the first, Reputation – based on all previous observations, including indirect ones – is passed on; in the second, only Direct Observations are passed on. What difference does this make?
• What strategies can an attacking node employ to distort the reputation system, in addition to lying?
• How can the reputation system recover from false positives or negatives?
• What is the impact of incomplete information? Especially in distributed reputation systems, such as those for self-organized networks, nodes have only a partial view of the environment: only a subset of all peers is known, and only a subset of the behavior of those peers.
• What is the impact of wrong observations? These can happen due to the inherent lack of unambiguous observability, for example the difficulty of distinguishing between deliberate packet dropping and congestion, mobility, or loss of connectivity in wireless networks.
• Why should nodes participate in a reputation system? Is there an incentive to cooperate and contribute to a reputation system, and to do so honestly?
• How accurate and fair is the reputation system? How well do the reputation records represent past behavior? The goal of reputation systems is usually twofold: to enable nodes to find good peers and to give an incentive for cooperation. It is not straightforward to do this in an accurate and fair way, taking into account not only the actual instances of cooperation, but both the opportunity and the willingness to cooperate. What is the metric for accuracy?

Methodology to Answer Fundamental Questions
To answer fundamental questions about general reputation systems independently of implementation details, we suggest the consideration of an abstract model supported by simulations and measurements.
  For the modeling part, we are not concerned with the detection and response components of a system, but focus on the actual formation of reputation. The detection component depends on the application scenario and we merely assume that misbehavior can be told apart from good behavior. Moreover, we assume that if reputation values can be computed accurately, then there exists a response mechanism using them to obtain the desired effects. Typically, this might mean exclusion of the misbehaving user from benefits.
  We formulate the system as a stochastic process, from which we derive an ordinary differential equation by averaging the dynamics and passing to a fast-time scaling limit. That is, we scale time so that events occur more frequently, i.e., users make observations at a higher rate, but at the same time the impact of each observation is reduced by the same factor. We then derive the solutions of the differential equation and study their fixed points. Thus, our approach can be called a mean-field approach [9]. Moreover, we use simulation and direct computation to confirm the analytical results.
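  Schematically, and in generic notation rather than the exact model of [11]-[13], the scaling works as in standard stochastic approximation. Writing the reputation value after the k-th event as $X_k$, the random observation at that event as $\xi_k$, and the reduced impact of a single observation as a step size $\varepsilon$:

    X_{k+1} = X_k + \varepsilon \, f(X_k, \xi_k)
    \quad\longrightarrow\quad
    \frac{dx}{dt} = \mathbb{E}\left[ f(x, \xi) \right]
    \qquad (\varepsilon \to 0)

  The fixed points of the limiting ordinary differential equation are the candidate reputation values on which the system settles.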
  Reputation or recommender systems can collect opinions about the quality of objects such as films. Here, we are concerned with reputation systems for self-organized networks where nodes give reputation ratings about other nodes, i.e., active subjects and peers in the network. 
There are no benchmarks available for the simulation of reputation systems in self-organized networks. It is customary to simulate such systems by augmenting a simulation of regular network behavior with a reputation system component and simulating specific scenarios of node misbehavior.
  The parameters for regular network behavior typically include the number of nodes, at least an initial topology (in the case of model Internet-based peer-to-peer networks, potentially also topology control of the overlay network), and a routing protocol. Simulations of wireless networks additionally include a mobility model, assumptions about physical characteristics of devices, and terrains.
  Simulating self-organized networks even without reputation systems already offers many potential pitfalls, including parameter choices that make the obtained results difficult to reproduce, or that make it difficult to achieve statistical significance and to generalize results. Therefore, additional care has to be exercised when simulating reputation systems on top of regular network behavior. The model of node behavior (and misbehavior), both in the network (e.g., forwarding) and in the reputation system itself (e.g., lying), strongly influences the results and determines the scope of the conclusions that can be drawn from them. Using a wide range of scenarios and node behaviors (threat models, attacker/failure models) can help expose vulnerabilities of a reputation system by simulation.

Lessons Learned
Motivated originally by observations in everyday life as well as by research in the Social Sciences, and supported by our analytical modeling as well as simulation results, we have learned the following for the design of a reputation system.
 
Lesson 1: Deviation Tests Mitigate Spurious Ratings
Using a test to evaluate each piece of information with respect to how it conforms to a node’s own view, i.e., quantifying the congruence of views necessary for confirmation bias, turns out to be a more fine-grained and adaptive approach than only considering the rater reputation of the node providing the reputation information. The deviation test works as follows. Every time a node receives reputation information from another node, it has to decide whether and how to consider this information. It compares the received information with its own prior knowledge and only accepts it if the deviation is less than a specified acceptable deviation, otherwise the received information is discarded. This produces a non-linear effect.
  More precisely, we find by analysis [11]-[13] that, in order to have an impact, the number of liars Nl in the network needs to exceed a certain threshold. That is, there is a phase transition. The phase transition can equivalently be phrased in terms of the parameter Δ (the deviation threshold) rather than in terms of Nl: if Δ is below a certain threshold, which can be computed, the liars have no impact.
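  The effect can be reproduced in a toy experiment (ours, for illustration only, not the model analyzed in [11]-[13]): honest reporters rate a subject whose true quality is 0.2 with some noise, liars always report 1.0, and the observer applies the deviation test sketched earlier.

    import random

    def settled_value(n_honest: int, n_liars: int, truth: float = 0.2,
                      delta: float = 0.2, w: float = 0.05,
                      rounds: int = 2000, seed: int = 1) -> float:
        """Run one observer against a mixed population of reporters and
        return its final reputation estimate for the subject."""
        rng = random.Random(seed)
        x = truth  # direct observation anchors the starting estimate
        pool = [True] * n_honest + [False] * n_liars
        for _ in range(rounds):
            honest = rng.choice(pool)
            report = truth + rng.gauss(0, 0.05) if honest else 1.0
            if abs(report - x) <= delta:  # deviation test
                x = (1 - w) * x + w * report
        return x

    # With delta = 0.2 the liars' reports (all 1.0) never pass the test,
    # and the estimate stays near 0.2 regardless of n_liars; widen delta
    # past the gap to the liars' value and the estimate is dragged upward.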

Lesson 2: Discounting Adds Resilience 
Giving more weight to recent behavior and discounting past behavior as time passes achieves two objectives: better correlation to future behavior and allowing for node redemption. When past behavior is discounted, nodes cannot capitalize on previous good behavior but have to consistently behave well to maintain a good reputation. Information about nodes has to be constantly reinforced to stay current. Node redemption allows for a node to regain at least a neutral reputation after a specified time period (determined by the discount rate) without bad behavior. This is crucial for dealing with formerly faulty nodes that have been repaired, and useful in general to adapt to behavior changes of nodes regardless of the reason.

Lesson 3: Passing on First-Hand Information Only (Direct Observation) Improves Accuracy
Passing on information received from others (i.e., rumor spreading), as opposed to passing on only direct observations, turns out not only to offer no gain in reputation accuracy or speed, but also to introduce vulnerabilities by creating a spiral of self-reinforcing information [3]. 
This is confirmed by theoretical analysis. We find that the performance of Direct Observation and Reputation coincide in some range (namely, if and only if θ > 2Δ, where θ is the probability that a well-behaving node is indeed observed as behaving well), but that otherwise Direct Observation is more robust against liars. For Reputation, second-hand information does not improve accuracy, whereas for Direct Observation it does. Overall, Direct Observation is better.

Lesson 4: Secondary Responses Accelerate Classification
Granting redemption (by means of discounting, as described above) to a node previously classified as having a reputation rating too low for cooperation creates a potential vulnerability, since it provides a renewed chance for misbehavior. To offset it, the mechanism of secondary response has turned out to be useful. Secondary response is inspired by the human immune system and means, in the context of reputation systems, that the tolerance of misbehavior is reduced for nodes that have been deemed misbehaving previously.

Lesson 5: Identity is an Issue
If the identity of a node cannot be established at all, or only for a short while, the accuracy and robustness of reputation systems suffer. While in some environments the need for reputation information might be limited to short periods of time (e.g., establishing a path in a network), so that short-lived identities can be tolerated, in general a reputation system needs longer-lived identities to make use of its features. In particular, identities have to persist longer than the detection time of a misbehaving node. A fundamental requirement for identity in reputation systems is that one can be sure that an observed behavior has actually been exhibited by the observed node. Resistance to identity spoofing, be it impersonation or the creation of false identities (e.g., Sybil attacks [7]), and the persistence of identities over time are crucial for a reputation system's effectiveness. The assumption of accurate and stable identities, generally made for reputation systems, is a strong one and difficult to realize in self-organized environments. 4G networks can potentially provide a solution to this problem by offering a mix of self-organized networking and access to infrastructure that would enable central authorities.

Issues For Reputation Research
We now indicate conclusions that might be of interest for future research on reputation systems.

Issue 1: Coherent Terminology
It has become apparent that both definition and representation of reputation vary widely even within computer science. While it is debatable whether or not the same definition and representation should be used for all applications, a more coherent terminology would certainly be desirable. Moreover, it would be useful to have a coherent classification of reputation systems.

Issue 2: Coherent Classification 
Based on a more coherent terminology, it would also be desirable to bring together the different strands of research, within computer science but also between disciplines. This would avoid much of the duplication of effort that can be observed at the moment. The reputation research network went only some way toward this. In this article, we have attempted to at least point out work on reputation in the different communities. A number of interesting examples of the successful combination of different disciplines (physics, economics, social sciences) can be found in [2].

Issue 3: Fundamental Questions as well as Specific Implementations 
Clearly, specific applications for distributed reputation systems are of crucial importance and many other articles address various scenarios. However, it is also important not to get lost in the details. There are fundamental questions that are important in all these scenarios that should be addressed on a suitable level of abstraction. We have provided a list of them in the earlier section of this article, “Questions for Reputation Systems.”

Issue 4: Models as well as Simulation and Measurements
Finally, computer science research is often based on simulations, measurements, or the implementation and testing of prototypes. For example, prototype protocols are typically evaluated using a network simulator such as ns-2 or GloMoSim. Apart from game- and graph-theoretic investigations, there are comparatively few analytical studies, although they often provide insight that is hard to obtain otherwise. An example of this is the stochastic process formulation of the model referred to in the "Methodology" section above. Even though results are typically proven valid under clearly defined assumptions, some of which might be unrealistic, it is often the case that results remain valid at least qualitatively even if the assumptions are violated. In the social sciences context in particular, such approaches are rare, although it would be desirable to enhance the predictive capabilities of social network research. Moreover, there might yet be other approaches (beyond game-theoretic, graph-theoretic, and stochastic models) that have not been considered.

Reputation Systems in 4G
4G wireless networks provide new opportunities and challenges for reputation systems but share the same fundamental questions. The additional questions specific to 4G networks arise from their combination of self-organization and access to infrastructure; e.g., could 4G address identity better? 4G networks would enable centralized authorities most of the time, but also have distributed, self-organized components. More centralization means that more data can be kept and that identities can be verified more easily, e.g., by public key infrastructures. The reputation system should ideally take advantage of the added benefit provided by mostly-available central authorities but still function in a self-organized way. When in disconnected mode, or when switching back and forth between centralized and decentralized modes, updates need to be kept consistent; combining the two worlds is a challenge. 
  With 4G, nodes can be in more and different networks at the same time. Multihoming and the availability of ubiquitous network access via several technologies in the same physical space enables more decision space for network selection, trading off bandwidth, power, cost, and convenience. Selecting a network for each transaction can be facilitated by using reputation systems that capture the properties of the available networks. Besides direct interaction with the 4G infrastructure by the network provider, nodes can decide to cooperate directly to take advantage of low-cost high-bandwidth local connections to share content (e.g., obtained from low-bandwidth provider links) in real time, or pass on messages in delay-tolerant networks. 
  Reputation systems are useful wherever there is cooperation and wherever choices can be made in a more informed way. Although the applications for 4G themselves are yet to clearly emerge, it is clear that 4G networks provide more cooperation and choice than traditional networks, and we submit that reputation systems are a valuable tool for making 4G networks work well.

Fundamental Features Shared
Reputation systems across different domains share some fundamental features and questions that need to be addressed concerning the accuracy of reputation values and the robustness against spurious information. In addition, 4G networks call for a combination of decentralized and infrastructure-based operation, which poses a set of new challenges. We have learned some lessons from the development and analysis of reputation systems in self-organized networks and have developed methodologies to address some of the fundamental questions of reputation systems. For the remaining open questions, we think that using a coherent terminology and classification across disciplines, and addressing fundamental questions independently of their environment by means of both analytic modeling and simulation, will help in finding solutions.

Emerging Market Handset Programme

Since the launch of the GSMA's Emerging Market Handset (EMH) Programme, the mobile industry has driven the wholesale cost of mobile phones to below US$30 and the 3GSM Congress in Singapore saw Motorola, once again, selected to supply the phase two handset. 

Phase two began in June 2005 when the second 'Invitation to Strategic Partnership' was issued to vendors. The GSMA programme, which is chaired by Erik Aas, the Chief Executive of Grameen Phone Ltd. of Bangladesh, selected Motorola on nine strict criteria: price, functionality, logistics capability, service support, brand, marketing support, form factor, usability & strategic commitment.

"Motorola won thanks to a combination of a portfolio starting from sub-US$30, together with other key factors such as after-sales support, local service, brand presence and a choice of low-cost handset models including an exclusive product, the C113a for this programme" said Rob Conway, Chief Executive and board member of the GSMA at the 3GSM World Congress in Singapore.

Motorola submitted two handsets in its proposal - the C113 and the C113a, which was specifically designed for the EMH programme. The C113a offers talk times of up to 450 minutes and up to 330 hours of standby, reducing the need for frequent recharging. The handsets will be available early in 2006, when the 10 operators supporting phase two (AIS, Bharti, BPL, Globe Telecom, Hutchison Essar, IDEA Cellular, MTN Group, Orascom Telecom, Telenor and Vodacom) expect to order about 6 million of these handsets from Motorola.

"To get below US$30 per handset is a milestone achievement," saidCraig Ehrlich, Chairman of the GSMA, "Today's news cements the formation of a whole new market segment for the mobile industry and will bring the benefits of mobile communications to a huge swathe of people in developing countries."

The Wider Picture
As part of its wider 'Connecting the Unconnected' initiative, the GSMA is also working to tackle the other costs of ownership - regulation, taxation and service - in addition to encouraging ways to flatten the user payment curve. The recently published emerging markets taxation study is one example of the GSMA's other initiatives in this area. This report reveals that punitive tax levels in some developing countries are pushing the price of handsets and mobile services beyond the means of many people.

Emerging Market Handset Programme - Phase One History


Context 
The GSM industry is moving quickly towards its next billion connections, the majority of which will come from emerging markets. 

The Emerging Market Handset programme forms a key component of the GSM Association's (GSMA's) "Connecting the Unconnected" initiative that has attracted widespread industry and government recognition. 

Research estimates that in the region of 80% of the world's population have wireless coverage but only 25% of people use mobile services. The research also identifies handset cost as one of the most significant barriers to mobile communications affordability in emerging markets. 

Programme Overview 
In response to this opportunity the GSMA, working with its operator members, created the Emerging Market Handset (EMH) Programme to catalyse the creation of a new Ultra-Low Cost Handset segment. 

The programme is now successfully underway - during the first phase, EMH handsets were supplied through 10 participating operators into over 17 countries including India, South Africa, Nigeria, Democratic Republic of Congo (DRC), Egypt, Algeria, Tunisia, Bangladesh, Turkey, Thailand, Philippines, Malaysia, Indonesia, Pakistan, Yemen, Sri Lanka and Kenya; it is estimated that these markets have a total population of more than 1.8 billion people. 

Phase One History
To unlock this new market segment, the GSM Association brought together an Initial Working Group of its operator members that serve emerging markets and facilitated the first 'Invitation to Strategic Partnership'. In doing so, the GSM Association was able to create critical mass. 

This invitation resulted in dialogue with 18 vendors and in February 2005, Motorola was chosen to supply the first GSMA endorsed handset for this new segment. Motorola performed best against the criteria, offering a family of products built on its Ultra-Low Cost C114 platform that is optimised for the durability, long talk time, and design preferences of emerging markets. Motorola delivered these products at a price point below $40 (ex factory). 

Analysts say these handsets have allowed far more people to take advantage of mobile communications. For example, the arrival of the Motorola C115 in India helped boost Indian GSM operators' monthly net customer additions by one third to 1.6 million in June, according to a report by Lehman Brothers. 

 

Sunday, September 28, 2008

Injectable Tissue Engineering

Every year, more than 700,000 patients in the United States undergo joint replacement surgery. The procedure, in which a knee or a hip is replaced with an artificial implant, is highly invasive, and many patients delay the surgery for as long as they can. Jennifer Elisseeff, a biomedical engineer at Johns Hopkins University, hopes to change that with a treatment that does away with surgery entirely: injectable tissue engineering. She and her colleagues have developed a way to inject joints with specially designed mixtures of polymers, cells, and growth stimulators that solidify and form healthy tissue. "We're not just trying to improve the current therapy," says Elisseeff. "We're really trying to change it completely."

Elisseeff is part of a growing movement that is pushing the bounds of tissue engineering, a field researchers have long hoped would produce lab-grown alternatives to transplanted organs and tissues. For the last three decades, researchers have focused on growing new tissues on polymer scaffolds in the lab. While this approach has had success producing small amounts of cartilage and skin, researchers have had difficulty keeping cells alive on larger scaffolds. And even if those problems could be worked out, surgeons would still have to implant the lab-grown tissues. Now Elisseeff, along with other academic and industry researchers, is turning to injectable systems that are less invasive and far cheaper. Many of the first tissue-engineering applications to reach the market could be delivered by syringe rather than by implant, and Elisseeff is pushing to make this happen as soon as possible.

Elisseeff and her colleagues have used an injectable system to grow cartilage in mice. The researchers added cartilage cells to a light-sensitive liquid polymer and injected it under the skin on the backs of mice. They then shone ultraviolet light through the skin, causing the polymer to harden and encapsulate the cells. Over time, the cells multiplied and developed into cartilage. To test the feasibility of the technique for minimally invasive surgery, the researchers injected the liquid into the knee joints of cadavers. The surgeons used a fiber-optic tube to view the hardening process on a television monitor. "This has huge implications," says James Wenz, an orthopedic surgeon at Johns Hopkins who is collaborating with Elisseeff.

While most research on injectable systems has focused on cartilage and bone, observers say this technology could be extended to tissues such as those of the liver and heart. The method could be used to replace diseased portions of an organ or to enhance its functioning, says Harvard University pediatric surgeon Anthony Atala. In the case of heart failure, instead of opening the chest and surgically implanting an engineered valve or muscle tissue, he says, simply injecting the right combination of cells and signals might do the trick.

For Elisseeff and the rest of the field, the next frontier lies in a powerful new tool: stem cells. Derived from sources like bone marrow and embryos, stem cells have the ability to differentiate into numerous types of cells. Elisseeff and her colleagues have exploited that ability to grow new cartilage and bone simultaneously, one of the trickiest feats in tissue engineering. They made layers of a polymer-and-stem-cell mixture, infusing each layer with specific chemical signals that triggered the cells to develop into either bone or cartilage. Such hybrid materials would simplify, for instance, knee replacement surgeries that require surgeons to replace the top of the shin bone and the cartilage above it.

Don't expect tissue engineers to grow entire artificial organs anytime soon. Elisseeff, for one, is aiming for smaller advances that will make tissue engineering a reality within the decade. For the thousands of U.S. patients who need new joints every year, such small feats could be huge. - Alexandra M. Goho

Others in Injectable Tissue Engineering

Anthony Atala, Harvard Medical School – Cartilage
Jim Burns, Genzyme – Cartilage
Antonios Mikos, Rice U. – Bone and cardiovascular tissue
David Mooney, U. Michigan – Bone and cartilage

Wireless Sensor Networks

Great Duck Island, a 90-hectare expanse of rock and grass off the coast of Maine, is home to one of the world's largest breeding colonies of Leach's storm petrels, and to one of the world's most advanced experiments in wireless networking. Last summer, researchers bugged dozens of the petrels' nesting burrows with small monitoring devices called motes. Each is about the size of its power source (a pair of AA batteries) and is equipped with a processor, a tiny amount of computer memory, and sensors that monitor light, humidity, pressure, and heat. There's also a radio transceiver just powerful enough to broadcast snippets of data to nearby motes and pass on information received from other neighbors, bucket-brigade style.

This is more than the latest in avian intelligence gathering. The motes preview a future pervaded by networks of wireless battery-powered sensors that monitor our environment, our machines, and even us. It's a future that David Culler, a computer scientist at the University of California, Berkeley, has been working toward for the last four years. "It's one of the big opportunities" in information technology, says Culler. "Low-power wireless sensor networks are spearheading what the future of computing is going to look like."

Culler is on partial leave from Berkeley to direct an Intel "lablet" that is perfecting the motes, as well as the hardware and software systems needed to clear the way for wireless networks made up of thousands or even millions of sensors. These networks will observe just about everything, including traffic, weather, seismic activity, the movements of troops on battlefields, and the stresses on buildings and bridges, all on a far finer scale than has been possible before.

Because such networks will be too distributed to have the sensors hard-wired into the electrical or communications grids, the lablet's first challenge was to make its prototype motes communicate wirelessly with minimal battery power. "The devices have to organize themselves in a network by listening to one another and figuring out who can they hear...but it costs power to even listen," says Culler. That meant finding a way to leave the motes' radios off most of the time and still allow data to hop through the network, mote by mote, in much the same way that data on the Internet are broken into packets and routed from node to node.

Until Culler's group attacked the problem, wireless networking had lacked an equivalent to the data-handling protocols that make the Internet work. The lablet's solution: TinyOS, a compact operating system only a few kilobytes in size that handles such administrative tasks as encoding data packets for relay and turning on radios only when they're needed. The motes that run TinyOS should cost a few dollars apiece when mass-produced and are being field-tested in several locations from Maine to California, where Berkeley seismologists are using them to monitor earthquakes.
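The energy arithmetic behind this is easy to sketch. In the toy model below (illustrative only, not TinyOS code), motes arranged in a line keep their radios on for just two slots out of every ten, and a reading still hops, awake slot by awake slot, to the sink:

    AWAKE = {0, 1}  # radio on in 2 of every 10 slots: a 20% duty cycle

    def slots_to_sink(origin: int, slots_per_cycle: int = 10) -> int:
        """Count the time slots needed for a packet to hop from mote
        `origin` down a line of motes to the sink (mote 0), hopping
        one position per awake slot."""
        position, slot = origin, 0
        while position > 0:
            if slot % slots_per_cycle in AWAKE:  # neighbors are listening
                position -= 1                    # one hop toward the sink
            slot += 1
        return slot

    # slots_to_sink(5) == 21: five hops spread over three wake-up windows,
    # while the radio sits dark roughly 80% of the time; that idle time is
    # where the energy savings come from.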

Anyone is free to download and tinker with TinyOS, so researchers outside of Berkeley and Intel can test wireless sensor networks in a range of environments without having to reinvent the underlying technology. Culler's motes have been "a tremendously enabling platform," says Deborah Estrin, director of the Center for Embedded Networked Sensing at the University of California, Los Angeles. Estrin is rigging a nature reserve in the San Jacinto mountains with a dense array of wireless microclimate and imaging sensors.

Others are trying to make motes even smaller. A group led by Berkeley computer scientist Kristofer Pister is aiming for one cubic millimeter, the size of a few dust mites. At that scale, wireless sensors could permeate highway surfaces, building materials, fabrics, and perhaps even our bodies. The resulting data bonanza could vastly increase our understanding of our physical environment, and help us protect our own nests. - Wade Roush

Others in Wireless Sensor Networks

Gaetano Borriello, U. Washington; Intel – Small embedded computers and communications protocols
Deborah Estrin, U. California, Los Angeles – Networking, middleware, data handling, and hardware for distributed sensors and actuators
Michael Horton, Crossbow Technology – Manufacture of sensors and motes
Kristofer Pister, U. California, Berkeley – Millimeter-size sensing and communication devices

Saturday, September 27, 2008

The World's Most Extreme Photography Equipment

Posted Tuesday, 8 April 2008 by Lars Hasvoll Bakke in Technology, Photography
There are several categories of camera gear available: there's the sensible, the desirable and then there's the stuff that you'd never even imagined. Here's a selection of equipment that most definitely belongs in the latter category.

Sigma APO 200-500 F/2.8

Sigma 200-500mm Super Telephoto lens
Image: Sigma

Perhaps the most "sensible" of the items presented in this list, this is nevertheless one of the heftiest tele zoom lenses for SLR cameras around. While the zoom range of 200-500mm is nothing new or exciting, it's the maximum aperture of an incredible f/2.8 throughout the focal range that makes this such a special lens.

While a lot of tele lenses have a distinct cannon barrel look, Sigma have apparently done all they can to enhance that trait, giving the lens a leafy green finish. The end result is an extremely fast tele zoom lens that could easily be confused with a surface-to-air missile launcher.

» www.sigmaphoto.com



Zeiss Apo Sonnar T* 1700 mm F4

Zeiss Apo Sonnar T* 1700 mm F4
Image: Zeiss

For people who have been into photography for a while, the name Carl Zeiss means top of the line optical quality, usually with a matching price tag. While continuing to produce their top-of-the-line optics for various camera systems, Zeiss have more recently also begun cooperating with Nokia and Sony, making optics for their mobile phones and digital cameras.

Two years ago, the company presented a remarkable one-off tele lens, reportedly custom-built for a wealthy Qatari. Weighing in at 256 kilos, it is a 1700mm f/4 lens designed for medium format (roughly equal to 750mm in 35mm SLR terms). The monster bears more than a fleeting resemblance to a jet engine; given the size, the 'super tele lens' labeling on the side seems a little superfluous – it isn't very likely to be mistaken for an average 70-200mm, after all.

The little black lump at the end is an average 6x6 medium format camera, in itself quite a bulky piece of equipment, but completely dwarfed by the Zeiss lens. Upon its unveiling, it was said to be the largest non-military tele lens in the world. One wonders what the largest military tele lens might look like.

Drawing on their experience in manufacturing large telescopes and instruments for the astronomical sciences, Zeiss had to develop an entirely new focussing system. Due to the massive size of the glass elements, the lens had to be equipped with extremely powerful focussing motors, capable of moving all that heavy glass around. The rear end of the lens has a dedicated built-in LCD monitor to display focussing distance, aperture and so on. No price has been published, but Zeiss hinted at a price of at least several million Euros.

The intended use for the lens is reportedly "antelope photography". This doesn't immediately strike one as the kind of kit you want to bring along on a safari to photograph fast-moving and easily startled animals – hiding in the bushes is certainly off the agenda – but the uncompromising construction is said to allow the lens to autofocus as fast as a 'regular' telephoto lens.



Polaroid 20x24'' Camera

Polaroid 20x24'' Camera
Image: Joyce Vanman / www.mammothcamera.com

The average film camera has for the last 50 years used either 120 rollfilm or so-called 135 film, 135 being by far the most commonly used type. Each frame of 135 film is 36x24 millimeters, while the average consumer dSLR camera today has a sensor measuring approximately 60% of this in linear terms, around 23x15 millimeters. The sensors in digital compacts are much smaller still. Within this tiny space, the camera and its lens have to compress the vast amount of detail visible to the human eye. The resulting replications of reality are far from perfect; they can't be.

One way of partially overcoming this problem is quite simply to use larger film formats or digital sensors. Within the digital realm, the 48x36mm sensor size available in certain medium format digital backs is pretty much as large as it gets without substantial R&D resources (like what a major corporation, national government or army might have at their disposal).

In film, things are a bit simpler. While constructing huge digital sensors is a challenging task, creating a huge sheet of film or photographic paper is really – simply put – just a matter of making it bigger than usual, and building a camera large enough to house it.

The biggest 'instant' camera I know of is Polaroid's 20x24'' behemoth. It's 1.5 meters tall and weighs in at 106 kilos. The Polaroid paper sheets used in this camera are, as the name implies, 20x24'', which equals 50x60 cm. Keeping in mind that the aforementioned 135 film is a mere 3.6x2.4 cm, it's easy to see why such a larger-than-life camera would be capable of producing prints of far superior detail compared to smaller formats.

A number of these cameras are available for hire, complete with a dedicated studio space, in San Francisco, New York and Prague. Following Polaroid's recent announcement that it will completely cease production of its signature instant film, there is a certain risk that these cameras will soon be destined for the museum.

» www.polaroid.com



Seitz 6x17'' digital panoramic camera

Seitz 6x17'' digital panoramic camera
Image: Seitz

Instead of the common digital camera sensor, which records the entire scene at once, the Seitz 6x17'' uses a scanner to literally scan the view through the lens. The end result is 160-megapixel images in a panoramic format. It does the job a bit faster than your average flatbed scanner though, recording a full-sized frame (21,250 x 7,500 pixels) in two seconds. It's big, it's heavy (5 kilos if you wish to use it outside a studio) and quite silly, but it turns out huge, amazing photos – and it should, costing as it does $42,000.

» www.roundshot.ch



Hasselblad H3DII

Hasselblad H3DII
Image: Hasselblad

Swedish camera manufacturer Hasselblad (or "'blad" as they are often called) has long ranked among the very best when it comes to cameras. Indeed, NASA's space programme chose Hasselblad as its camera provider, and three Hasselblads were carried aboard the Apollo 11 mission, perhaps the company's most famous feat.

Priced at around $40,000, a Hasselblad H3DII with a 39-megapixel back is one of the most expensive photo kits available in ordinary retail sale. It's fairly large, fabulously expensive and capable of creating huge, extremely detailed image files with its 39-megapixel, 48x36mm sensor.

For photographing your cat, you can probably make do without this camera, but if you're shooting supermodels for Vogue, you might just need a camera of this caliber. If you ever watch TV shows like "Top Model", there's a fair chance you'll see a 'blad involved in a shoot every now and then.

» www.hasselblad.com



Canon EF 1200mm f/5.6L USM

Canon 1200mm f5.6 lens
Image: Robert

While the 1700mm lens mentioned earlier is all fine and dandy if you've got a truck to mount it on, some may prefer a more lightweight, nimble solution. Weighing a mere 16.5 kilos and measuring only 83 centimeters long (without the bucket-like hood), this delicate little flower will nevertheless magnify faraway objects (or, perhaps more relevantly, faraway people) to a degree that leaves little to the imagination. To my knowledge, this is the longest focal length available to autofocus SLR cameras without using any extra magnifiers (teleconverters).

Due to its size, limited area of use and robust price tag, it has only been available from Canon built to order, and to date fewer than 20 examples have apparently been produced. The company recently announced that it would be dropping the 1200mm from its catalogue, so if you want one, you'd better be quick about it.

The suggested price of the lens upon its unveiling in 1993, converted to present-day money, puts it at approximately $120,000, or the cost of "a small sports car", which is the most common price comparison given for the lens.

» Canon Camera Museum



The Gigapxl Project

Launched by physicist Graham Flint, the Gigapxl Project set about creating a camera system that would allow the creation of photos with billions of pixels (or thousands of megapixels if you like). The Gigapxl Project employs a large format camera with 9x18'' film sheets to shoot big panorama photos of places of interest, primarily in the USA.

The film sheets are then scanned using a highly sophisticated technique, resulting in digital files with a resolution equivalent to several gigapixels. Though the original aim was to reach a single gigapixel (1,000 megapixels), the project website now claims it is able to create images with a resolution of approximately 6 gigapixels.

Nowadays, camera manufacturers like to stick very dense sensors into tiny consumer cameras with mediocre optics, which results in images that, despite their 12 or 14 megapixel resolution, aren't really any better than 4-megapixel ones. It's a way of misleading customers who don't know much about digital photography, as most people seem to think that more megapixels equals better photos – a truth with great limitations.

It would be easy to think that the Gigapxl Project is much the same, just a whole lot of pixels wasted on creating huge digital files that contain little in terms of actual details. However, at the project website, it's made very clear that the technology and knowledge put into these photos means that the 2, 4 or 6 gigapixel photos they produce are in fact as detailed as their pixel size suggests. But why take my word for it? Check out the amazing images in their gallery for an idea of what I'm talking about!

» www.gigapxl.org



Cameratruck

The Camera Truck
Image: Cameratruck

A pinhole camera is perhaps the simplest kind of camera there is. You make a tiny hole in an otherwise light-sealed container, put in a sheet of film or other photo-sensitive medium, point it towards what you want to photograph and let light pass through the hole for a set period of time. Just like light focused through the lens of an ordinary camera, the light passing through the hole results in a photo, be it on a digital sensor or on a piece of film.
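For the curious, the optics can be put in one line: a pinhole of diameter d sitting at distance f from the film behaves like a lens stopped down to the f-number

    N = f / d

and the required exposure time grows roughly as N². So, to pick purely illustrative numbers, a 2 mm hole two meters from the paper gives N = 2000/2 = 1000; compared with an ordinary lens at f/16, that is roughly (1000/16)² ≈ 3,900 times slower, which is why pinhole exposures are measured in minutes rather than fractions of a second.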

Pretty much any container can be made into a pinhole camera; the more outlandish the better – the Pringles Cam, Spam Cam and Trashcan Cam are just a few examples I've seen.

But one could also be built out of a box truck, which is exactly what an inventive bunch of Spaniards and Americans did. By drilling a hole in its side and attaching huge sheets of photographic paper (100x30 cm) to the inside of a truck, they created a huge mobile pinhole camera.

» cameratruck.es



"The Great Picture"

The Great Picture Hangar
The photographic canvas of "The Great Picture"
Image: The Legacy Project

But why stop at a truck, when you could convert an entire airplane hangar into a pinhole camera? While the Cameratruck above is touted as the world's largest mobile camera, this hangar is certified by the Guinness Book of Records as the largest camera in existence, albeit an immobile one.

It's basically an old hangar building at the disused El Toro Marine Corps Air Station in Southern California, which has been made light-tight to ensure no light gets in except through the little pinhole in one of the hangar's sides. To create the image alluringly titled "The Great Picture", a huge sheet of made-to-order canvas was suspended inside the hangar and coated with 80 litres of Liquid Light, making it photosensitive.

The exposure time of the world's largest camera was set to approximately 35 minutes, after which the canvas was chemically developed (in a pool of 2,300 litres of developer – photography at this scale does not come cheap!) into the world's largest photo, 313 square meters (3,375 square feet) in size.

Compare that to a standard 135 film frame which, as you might remember from earlier in the article, is 36x24 millimeters, equalling 8.64 square centimeters, or 0.000864 square meters.

Cell Phone Or Mobile Phone VoIP Communication Technology is Changing the World of Business

Cell phone VoIP has taken the world of communication to a new level, enabling anyone to call from anywhere to any part of the world.

VoIP, or voice over Internet Protocol, has become the most popular communication trend. There is little need to explain its functionality: we have been bombarded with endless VoIP commercials showing us its cost effectiveness, which has spread knowledge of the technology. It goes without saying that VoIP is the most cost-effective means of voice communication.

However, the VoIP phone system has provided minimal benefit to busy professionals and business people on the go. This is because VoIP is not configured to work with mobile phones. VoIP mainly works with land lines, such as home and office phones, and this leaves some people with limited options for communicating with their recipients.

In order to reap the same VoIP savings, people who are constantly on the move have adopted WiFi technology. It is a very convenient technology that allows you to make a call from your cell phone provided you are close to a WiFi hotspot. WiFi phones use the same wireless network technology that computers use, making VoIP a lot more portable. One dilemma of WiFi phones is that the hotspot network is very limited, often leaving users without coverage. Some WiFi hotspots require a web browser to sign in; phones without a built-in browser are useless in such locations.

Can communication technology be upgraded to make WiFi calls without searching for hotspots? To some extent it already has been, but most people who rely on WiFi cell phone calls are still hampered by sparse WiFi networks and short range: you can only make a WiFi call within about 500 meters of a hotspot.

The latest popular cell phone VoIP technology has no such range limitation; one can make a call from anywhere to any country in the world at the push of a button. The VoIP cell phone is a great tool for business people conducting business internationally, for students studying overseas, and for soldiers and their families as well. As the communications industry evolves, we are likely to see more wonders.

Peter Benson has been helping business owners communicate effectively using the latest technology in the market. http://www.1button2wifi.com
