Saturday, August 31, 2019

Essay and referencing

The three communication theories I have applied to provide the most insight into the dynamics of the observed conversation between two people are the transmission model (Shannon & Weaver 1949), Foulger's ecological model (Foulger 2004), and an expanded model of communication (Munson 2012). In this essay I have used a conversation I listened to between my friend (from here on named George) and his friend (from here on named Mark). Problems that arose during the conversation will be explained, as well as how George and Mark overcame them. From there the essay will compare and contrast the three communication theories and decide which of them best explains the conversation.

Complexities of the conversation
I was asked by George to take him down to the pub to meet up with Mark for a couple of games of pool while chatting over a cold beer. George is nearly completely illiterate because he left school at the age of 12, moved to the Northern Territory and worked on a cattle farm until the age of 19, when he moved back to Lissome. Mark is completely deaf and has next to no ability to lip read (this I had not known until I met him). Neither George nor Mark knows sign language of any kind. George has learnt to use abbreviations in text messages, which is about the extent of his written language capabilities.

Problems that arose
George had been avoiding this meeting because he finds the conversations very difficult, and this usually leads to heated discussions, especially when the conversations center on Mark's granddaughter (for whom George has always had affectionate feelings but never acted upon). Both George and Mark have their own perceptions of what is going on in her life, which has caused, and still causes, either Mark or George to have expectations of the other, and creates selective perceptions (Withes 2009). This introduces psychological noise and detracts from the meanings of some of the messages in the conversation. The one-way lack of tone and inflection in the voice to communicate feelings and emotions more clearly can produce misunderstanding of the meaning of some messages. The language barrier between a non-literate and a literate person poses the greatest barrier to messages passing in either direction. Feedback is restricted to kinesic emblems, regulators and illustrators (DeVito 2001). George and Mark (who is 64) have different educational backgrounds in which their written communication differs. Text messaging, or texting (Shaw et al. 2007), can be a major source of misinterpretation, though Mark has been using his mobile phone for some time now. This leads to the conclusion that texting increases the ways Mark can interact with George, though there is still the written language barrier between them (Kumara et al. 2011).

How George and Mark overcame these problems
The over-emphasis of kinesic emblems, regulators and illustrators (DeVito 2001) had to be used as feedback due to the lack of language being used by George. Facial expressions become very important as Mark and George try to convey their own emotions and comprehend each other's, from understanding to frustration. Increased eye contact, which would make most people more nervous and defensive (DeVito 2001), becomes a highly prized resource for feedback, and it also helps regulate the control of the conversation.

Shannon and Weaver's transmission model (Shannon & Weaver 1949) seems to be the simplest model, and therefore maybe the best for most situations; however, it lacks the detail that the complexities of this particular conversation demand, in particular the noise sources.
Figure 1: Transmission model (Source: Shannon & Weaver 1949)
Foulger's ecological model is an elaboration of Lasswell's (1948) model of "Who says what, in which channel, to whom, with what effect" (Foulger 2004), though it takes into account the use of different languages across modern mediums. It is an excellent model, but it focuses more on the use of language and the media it is conveyed in, and not so much on person-to-person communication.
Figure 2: An ecological model of the communication process (Source: Foulger 2004)
Munson's expanded model of communication is more complex than Foulger's as well as Shannon and Weaver's models, though it is more appropriate to the conversation between George and Mark because it shows that, for communication to happen, the sender must pre-edit and then encode the message before passing it to the receiver, who decodes and edits it. Munson also takes into account the mechanical, behavioral and semantic factors of encoding, and recognizes that if the message is to be understood the receiver must be able to decode it. This is particularly relevant, and highly important, to this conversation between George and Mark due to the factors mentioned before.
Figure 3: An expanded model of communication (Source: Munson 2012)

Conclusion
I have found Shannon and Weaver's transmission model too simple to capture the complexities of this situation, and Foulger's model too broad and unable to focus on the problems that need to be addressed. Therefore, I believe that Munson's expanded model of communication is the most adequate of the three models discussed, because Munson's model shows how a message from George is first pre-edited (thoughts), then encoded (written on paper), passed on through noise (physical and psychological, as well as expectations and selective perception), decoded by the receiver (reading George's writing) and finally edited to Mark's own meaning and interpretation. This explains why there were some heated discussions in the past, and why there will continue to be until they are able to 'actively listen' to each other before they place their own selective perceptions and expectations on their conversation.

Friday, August 30, 2019

Investigating Factors That Affect the Rate of Reaction

Investigating Factors that Affect the Rate of Reaction of the Decomposition of Hydrogen Peroxide
Emilio Lanza

Introduction - In this experiment, the rate of reaction, calculated in kPa sec-1, of the decomposition of hydrogen peroxide will be investigated to see how the change in concentration of hydrogen peroxide and the change in temperature affect the rate of reaction. The data will be collected by measuring the gas pressure. One product of the decomposition of hydrogen peroxide is oxygen gas, so it is necessary to use the gas pressure sensor. By dividing the change in gas pressure by the elapsed time from the raw data, the rate of reaction of the decomposition of hydrogen peroxide can be found.
* Control Variables - 1 mL of yeast (the catalyst) is used in every trial. The volume of H2O2 is always 4 mL, even though the concentration changes, and the size and type of test tube are kept the same because they can change the pressure.
* Independent Variables - Concentration of H2O2 (M) and the temperature (°C).
* Dependent Variable - The rate of reaction of the decomposition of hydrogen peroxide: rate of reaction = ΔPressure (kPa) / ΔTime (sec).
* Research Question - The rate of reaction (kPa sec-1) of the decomposition of H2O2 must be calculated to understand how factors such as the change in concentration and the change in temperature of H2O2 affect the rate of reaction.

Materials and Method -
Materials:
* 0.5 M yeast solution (the catalyst) - 15 mL
* 45 mL of 3% H2O2 solution
* A thermometer
* A computer with the LoggerPro program
* A Vernier computer interface
* A Vernier Gas Pressure Sensor
* A 1 liter beaker
* A match to light the Bunsen burner
* A tripod
* Two 10 mL test tubes
* Two 10 mL pipettes
* Distilled water - 15 mL
* A mat/cover that is fire resistant
* 700 mL of room temperature water from a sink
* A one-hole rubber stopper with stem
* Two test tube holders
* Two 10 mL graduated cylinders
* A Bunsen burner
* Two solid rubber stoppers
* Plastic tubing containing two Luer-lock connectors
* A test tube rack

Procedure:
Part 1 of the experiment: Decomposing 3% H2O2 solution with 0.5 M yeast at about 30 °C
1. Take the 1 liter beaker and add 700 mL of room temperature water. Take the tripod, place the fire-resistant mat/cover on top of the tripod, and onto the mat/cover place the 1 liter beaker filled with the 700 mL of room temperature water from the sink.
2. First hook the rubber tube from the Bunsen burner to a gas source, then take a match and turn on the gas source. Once the gas is on, light the match and then light the Bunsen burner. (MAKE SURE NOT TO BURN YOURSELF!)
3. Place the lit Bunsen burner underneath the tripod so it can begin to heat the 1 liter beaker with the 700 mL of room temperature water from the sink.
4. Insert a thermometer into the 1 liter beaker that is being heated and adjust the flame of the Bunsen burner so it will heat the water to a temperature of about 30 °C.
5. Take the 10 mL pipette and the 10 mL graduated cylinder; using the pipette, transfer 4 mL of H2O2 from its container into the 10 mL graduated cylinder.
6. Take a 10 mL test tube and add the 4 mL of H2O2 from the 10 mL graduated cylinder into the 10 mL test tube.
Once that is done, take a rubber stopper and seal the 10 mL test tube containing the H2O2. Use a test tube holder to hold the test tube in the 1 liter beaker that is being heated to a temperature of about 30 °C. Make sure that the majority of the test tube is submerged in water.
7. Using the other 10 mL pipette, transfer 1 mL of 0.5 M yeast into the other 10 mL graduated cylinder. From this graduated cylinder, transfer the 0.5 M yeast to a new 10 mL test tube; seal the test tube with a new solid rubber stopper. With the other test tube holder, place this test tube containing 1 mL of 0.5 M yeast into the 1 liter beaker that is currently being heated to a temperature of about 30 °C.
8. Turn on a computer and start the LoggerPro program.
9. Connect the Gas Pressure Sensor to Channel 1 of the Vernier computer interface and, with the correct cable, attach the Vernier computer interface to the computer.
10. Take the plastic tubing with the Luer-lock connectors at either end, connect one end of the tubing to the base of the one-hole rubber stopper, and connect the other end to the white Luer-lock stem on the end of the Gas Pressure Sensor. (MAKE SURE THE PLASTIC TUBING IS TIGHTLY SECURED OR THE GAS WILL ESCAPE AND LEAD TO INACCURATE READINGS.)
11. Once the LoggerPro program has been opened, make sure that the label on the x-axis is time in seconds and that the units on the y-axis are pressure in kPa before collecting the data.
12. Leave the test tubes in the water bath for at least two minutes so that the solutions in the test tubes have a temperature of around 30 °C. Once the water is about 30 °C, record this temperature in a data table. When two minutes have passed, commence the reaction and collect the pressure data. Remove both test tubes from the water by holding onto the test tube holders, place them in a test tube rack, and remove each seal from the test tubes. Transfer the yeast solution from its test tube into the test tube containing the H2O2 solution and shake lightly to mix the two solutions together.
13. As quickly as possible, seal the test tube with the one-hole stopper connected to the Gas Pressure Sensor and place the test tube back into the water by holding it with the test tube holder. Next, click Collect on the LoggerPro program to begin collecting data. (THE LAST TWO STEPS ARE CRUCIAL AND MUST BE DONE AS QUICKLY AS POSSIBLE TO AVOID ANY EXTERNAL INFLUENCES.)
14. Collect the data for three minutes. Once three minutes are up, carefully remove the test tube from the water by holding onto the test tube holder and set it in the test tube rack. Next, slowly and carefully take out the stopper from the test tube, allowing the gas pressure to escape.
15. Store the results from the first trial by selecting Store Latest Run from the Experiment menu. After doing this, a table of data and the graph will be saved. Then make sure to clean the test tube and discard the solution that is in it. Repeat Part 1 two more times so you have three trials in total. Then print the graph and the full data table from each trial.

Part 2 of the experiment: Decomposing 1.5% H2O2 solution with 0.5 M yeast at about 30 °C
1.
Take a 10 mL graduated cylinder and, using a 10 mL pipette (make sure you are using the same pipette for the H2O2 as in previous trials and do not interchange this pipette with the one being used for the yeast), transfer 2 mL of H2O2 from the same container as in Part 1 into the 10 mL graduated cylinder. Once that is done, add 2 mL of distilled water to the graduated cylinder containing the H2O2.
2. Now take the 10 mL test tube (which has been thoroughly washed with water) and transfer the 4 mL of H2O2 mixed with distilled water from the 10 mL graduated cylinder into the 10 mL test tube. Then seal the 10 mL test tube containing the H2O2 with a rubber stopper. Use a test tube holder to place the test tube in the 1 liter beaker that is being heated to 30 °C. Be sure that the test tube sits deep enough in the 1 liter beaker.
3. Using the other 10 mL pipette, take 1 mL of 0.5 M yeast and pour it into the other 10 mL graduated cylinder. Then take the graduated cylinder and transfer the 0.5 M yeast to a new 10 mL test tube; close the test tube with a new rubber stopper so no air comes in. With the other test tube holder, place this test tube containing 1 mL of 0.5 M yeast into the 1 liter beaker that is currently being heated to a temperature of about 30 °C. Repeat steps 13-15 from Part 1.

Part 3 of the experiment: Decomposing 0.75% H2O2 solution with 0.5 M yeast at about 30 °C
1. Take a 10 mL graduated cylinder and, using a 10 mL pipette (make sure you are using the same pipette for the H2O2 as in previous trials and do not interchange this pipette with the one being used for the yeast), transfer 1 mL of H2O2 from the same container as in Part 1 into the 10 mL graduated cylinder. Add 3 mL of distilled water to the graduated cylinder containing the H2O2. Mix the solution gently.
2. Take a 10 mL test tube (which has been cleaned after the previous trials) and transfer the 4 mL of H2O2 mixed with distilled water from the 10 mL graduated cylinder into the 10 mL test tube. Then seal the 10 mL test tube containing the H2O2 with a solid rubber stopper. With one of the test tube holders, place the test tube into the 1 liter beaker that is currently being heated to a temperature of about 30 °C. Make sure that the majority of the test tube is submerged in water.
3. Using the other 10 mL pipette, transfer 1 mL of 0.5 M yeast into the other 10 mL graduated cylinder. From this graduated cylinder, transfer the 0.5 M yeast to a new 10 mL test tube; seal the test tube with a new solid rubber stopper. With the other test tube holder, place this test tube containing 1 mL of 0.5 M yeast into the 1 liter beaker that is currently being heated to a temperature of about 30 °C. Repeat steps 13-15 from Part 1.

Part 4 of the experiment: Decomposing 3.0% H2O2 solution with 0.5 M yeast at about 35 °C
1. For this part repeat steps 6-7 and 13-15 from Part 1. The only thing that needs to be changed is that the water needs to be about 35 °C.

Part 5 of the experiment: Decomposing 3.0% H2O2 solution with 0.5 M yeast at about 40 °C
1. For Part 5 repeat steps 6-7 and 13-15 from Part 1. The only thing that needs to be changed is that the water needs to be about 40 °C.

Steps once all five parts of the experiment are complete
1. Look at the data table that has been filled in for each trial of each part and calculate the average rate of reaction (kPa sec-1) of the decomposition of H2O2 that occurred over 3 minutes for each part, and put it into the analysis table.
2.
Insert the concentration of H2O2 and yeast from each part into the analysis table as well.
3. Make sure to find the average temperature (°C) and include it in the analysis table.
4. Then compare and contrast the different effects on the rate of reaction caused by the change in concentration of H2O2 and by the change in temperature.

(The data table below is an example of the table that will be printed from the computer after each trial and part is done in the LoggerPro program. The only difference is that the real table will record the gas pressure up to 3 minutes. Again, this is only an example of how it should look.)

The Gas Pressure from the Decomposition of H2O2 After Every Second
Time (sec) | Gas Pressure (kPa)
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 |

Data Analysis Table for the Decomposition of H2O2
Part # | Average Temperature (°C) | Average Rate of Reaction (kPa sec-1) | Concentration of H2O2 in % | Concentration of Yeast (M)
Part 1 | | | |
Part 2 | | | |
Part 3 | | | |
Part 4 | | | |
Part 5 | | | |

The Temperature (°C) of the Water During Each Part of the Lab and Each Trial
Parts of Experiment | Trial 1 | Trial 2 | Trial 3
Part 1 Temperature (°C) | | |
Part 2 Temperature (°C) | | |
Part 3 Temperature (°C) | | |
Part 4 Temperature (°C) | | |
Part 5 Temperature (°C) | | |
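To make the rate calculation above concrete, here is a minimal Python sketch (not part of the original lab) of how the average rate of reaction, ΔPressure / ΔTime, could be computed from exported time and pressure readings. The sample values and variable names are hypothetical placeholders, not measured data; real values would come from the LoggerPro export for each trial.

def average_rate(times_sec, pressures_kpa):
    """Average rate of reaction = change in pressure / change in time."""
    delta_p = pressures_kpa[-1] - pressures_kpa[0]  # total pressure change, kPa
    delta_t = times_sec[-1] - times_sec[0]          # elapsed time, seconds
    return delta_p / delta_t                        # kPa sec-1

# Hypothetical readings over a 3-minute (180 s) run, one reading every 30 s.
times = [0, 30, 60, 90, 120, 150, 180]                          # seconds
pressures = [101.3, 103.0, 104.6, 106.1, 107.5, 108.8, 110.0]   # kPa

print(f"Average rate of reaction: {average_rate(times, pressures):.4f} kPa sec-1")

The result from each trial of each part would then be averaged across the three trials and entered into the Average Rate of Reaction column of the analysis table.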

Thursday, August 29, 2019

Nintendo Strategy

Simplifying the design and use of the Wii system allowed the developers to create the perfect entry strategy for their new target market, with great success. In the first half of 2007, the Nintendo Wii sold more units in the United States than the Xbox 360 and PlayStation 3. In the first quarter of 2008, Nintendo's net sales were up over 20% from the same quarter the previous year, and the Wii was outselling its seventh-generation home system rivals, the Sony PlayStation 3 and the Xbox 360. Nintendo's net income in the same quarter was up over 30% from the same quarter the previous year due to the strength of Wii and Nintendo DS hardware and software sales. Finally, most believe that both Sony and Microsoft had been traditionally operating at a loss with anticipated gains in software and game sales, while Nintendo enjoyed operating profits. Although initially surprised by the Wii's resounding broad appeal, Sony and Microsoft were prepared for a series of competitive counter moves designed to attack the Wii's popularity going into the 2008 holiday season.

Some of your students may have received early versions of game consoles such as the Nintendo 64, Sega Genesis, or PlayStation, or handheld games such as the Nintendo Game Boy, as gifts when they were children. Given the increasing popularity, sophistication and complexity of consoles, it is also likely that a majority of your students currently own one of the handheld games or consoles mentioned in the case.

The case will allow you to illustrate concepts from Chapters 3-6 if used as a stand-alone case, or it can be paired with Case 11, Competition in the Video Game Console Industry, if you prefer to use the case to focus on the strategy options presented in Chapters 5 and 6. The case provides sufficient information to allow students to prepare a review of the industry's dominant economic characteristics, fully examine the competitive forces at play in the video game industry, consider the industry driving forces and key success factors, and examine Nintendo's internal situation and recent financial performance. The case also allows students to understand how focused differentiation strategies are capable of yielding above-average profit margins without a reliance on premium pricing. The case also allows students to understand the appeal of Nintendo's Blue Ocean strategy and observe how the company has turned a first-mover advantage into what appears to be a sustainable advantage. This teaching note reflects the thinking and analysis of the case authors, Professor Lou Marino and Sally Garrett, both of the University of Alabama. We are most grateful for their insight, analysis and contributions to how the case can be taught successfully.

Finally, the case's strong decision focus allows students to consider what Nintendo must do next to ultimately win the battle among next-generation video game consoles. To give students guidance in what to do and think about in preparing the Nintendo case for class discussion, we strongly recommend providing class members with a set of study questions and insisting that they prepare good notes/answers to these questions in preparing for class discussion of the case. To facilitate your use of study questions and making them available to students, we have posted a file of the Assignment Questions contained in this teaching note for Nintendo on the student section of the publisher's Online Learning Center for the 17th edition (www.mhhe.com/thompson).
(You should be aware that there is a set of study questions posted in the student OLC for each of the 26 cases included in the 17th edition.) In our experience, it is quite difficult to have an insightful and constructive class discussion of an assigned case unless students have conscientiously made use of pertinent core concepts and analytical tools in preparing answers to a set of well-conceived study questions before they come to class. In our classes, we expect students to bring their notes on the study questions to use/refer to in responding to the questions that we pose. Moreover, students often find that having a set of study questions is useful in helping them prepare oral team presentations and written case assignments, in addition to whatever directive questions you supply for these assignments. Hence, we urge that you insist students spend quality time preparing answers to study questions, either those we have provided or a set of your own questions.

There is a 2:48 video that accompanies this case and discusses how the Wii has expanded the market for video games by appealing to non-traditional gamers. It is best to show the video at the very beginning of the class discussion. The case can be used effectively for a written assignment or oral presentation. Our recommended questions for written assignments are as follows:

1. You have recently been hired by Nintendo of America as an analyst and have been assigned to its Wii strategy group. During your first meeting with the strategy group, the team leader asked that you prepare an analysis of the video game console industry for distribution at the next meeting. Please prepare a 5-6 page report that includes a description of the industry's dominant business and economic characteristics, evaluates competition in the industry, assesses industry driving forces, and lists industry key success factors. Your report should also include a strategic group map of the entire video game industry and specific strategy recommendations that will allow the Wii to remain the leading next-generation console.

2. As a newly hired Nintendo of America retail representative, you have been asked to join a cross-functional strategy group. The group's charge from upper-level management is to make a set of recommendations designed to further solidify the company's number-one ranking in the industry. Your recommendations to upper management should be in the form of a 2-3 page executive summary and must be supported with a complete industry analysis, company situation analysis, and financial analysis. Each recommendation should be supported by your analyses and must clearly specify what elements of your analysis led to your conclusions. The exhibits, tables and figures used in your analysis should be attached to your executive summary and carry an equal weight in determining your grade for the assignment.

ASSIGNMENT QUESTIONS
1. What are the defining business and economic characteristics of the video game console industry? What is the industry like?
2. What is competition like in the video game console industry? Do a five-forces analysis to support your answer. Which of the five competitive forces is strongest? Which is weakest? Would you characterize the overall strength of competition in video game consoles as fierce, strong, moderate to normal or weak? Why?
3. What forces are driving changes in the video game console industry?
Are these driving forces acting to make the industry more or less competitively intense? Are the driving forces acting to make the industry more or less profitable in future years?
4. What 3-5 key factors determine the success of video game console developers like Nintendo?
5. What is Nintendo's strategy? Which of the five generic strategies discussed in Chapter 5 is Nintendo using? What are some of the recent offensive and/or defensive strategies that Nintendo has employed? Have these tactics been successful?
6. Is it fair to characterize Nintendo's introduction of the Wii as a blue ocean strategy? Why or why not?
7. How well is Nintendo's strategy working in terms of the financial performance it is delivering? Should shareholders be pleased? Why or why not? What 2-3 weaknesses do you see in Nintendo's financial performance?
8. What does a SWOT analysis reveal about the attractiveness of Nintendo's overall situation? Is the company's competitive position as solid as top management seems to believe? Does the company have a competitive advantage? If so, what is the basis for this competitive advantage and is the advantage sustainable?
9. What does a competitive strength assessment (as per the methodology in Table 4.4 of Chapter 4) reveal about whether Nintendo has a competitive advantage?
10. What recommendations would you make to Nintendo to improve its competitiveness in the video game console industry and to maintain its favorable positioning vis-à-vis Microsoft and Sony?

TEACHING OUTLINE AND ANALYSIS
1. What are the defining business and economic characteristics of the video game console industry? What is the industry like?
Students should be able to identify the following business and economic characteristics of the console segment of the video game industry:
v Economies of scale: Competitors in the industry are large and achieve cost advantages by producing large quantities. However, both Sony and Microsoft have traditionally operated at a loss, in part due to heavy investments in research and development.
v Product innovation: Competitors win market share from rivals by developing products that are technologically superior and more powerful than the products offered by rivals. New products often contain technological breakthroughs such as advanced graphics or interactive motion-sensitive controllers as the basis for competition.
v Degree of product differentiation: Products in the market are becoming increasingly differentiated. Some products offer high-definition graphics and play DVDs, while others offer controllers with motion sensors to fundamentally change the way gamers play and interact with the game.
v Scope of competitive rivalry: Competition occurs on a global scale to help spread research and development costs while driving revenues. For the largest competitors, non-American sales account for the majority of worldwide sales, with the exception of the Xbox.
v Segmentation: The industry was segmented into console hardware, console software, handheld hardware, handheld software, PC software, online games, interactive TV, and mobile phone games.
v Market size: The total size of the global video game industry exceeded 69 million units sold in 2008.

Students should further identify the following as important attributes of the industry:
v Entry/exit barriers. Barriers to entry were all but insurmountable.
Successful new entrants were required to have sufficient capital and technological capabilities to develop sophisticated game hardware systems capable of performing highly complex calculations. Other barriers to entry included the establishment of an installed base of sufficient size to provide an adequate incentive for independent software developers to create games for a new game system.
v Scope of rivalry. Rivalry in the industry could be considered global, with the three largest sellers of game systems competing against each other in all world markets. Competition exists on the basis of technologically advanced and unique game systems.
v Scale economies. Economies of scale were necessary to keep game system and component development expenses at acceptable per-unit levels. Next-generation game system and component development costs were so high that analysts believed Sony and Microsoft consistently operated at a loss.
v Consumer characteristics. While typical gamers could be thought to have the demographic characteristics of being young and male, a new trend is emerging whereby traditional non-gamers are now potential consumers. This has expanded consumer characteristics to include a wider array of ages along with male and female consumers.

2. What is competition like in the video game console industry? Do a five-forces analysis to support your answer. Which of the five competitive forces is strongest? Which is weakest? Would you characterize the overall strength of competition in video game consoles as fierce, strong, moderate to normal or weak? Why?

Figure: Five-forces diagram for the video game console industry, showing rivalry among competing video game system sellers at the center, surrounded by competitive pressures from buyers of video game systems, suppliers of raw materials and other inputs used in the manufacturing of video game consoles, substitutes for video game systems, and potential new entrants into the video game console industry.

v The bargaining power and leverage of buyers – a weak competitive force. Big box electronics store and discount store buyers had relatively little leverage in negotiations with sellers of video game consoles. Consumers expected retailers to carry the three leading brands of consoles and the top two brands of handheld games. A decision by retailers not to carry the leading brands of game consoles would negatively impact the retailer's image with consumers. Students may suspect that manufacturers had uniform pricing for retailers, regardless of size, because of the standardized retail prices of game consoles.
v The bargaining power and leverage of suppliers – a moderately strong competitive force. Students will easily conclude that suppliers of microprocessors and graphics processing units (GPUs) had a moderate degree of leverage with console manufacturers because of the collaborative development process utilized in the industry. Console makers were unable to negotiate between sellers of core components, since microprocessors and GPUs were specifically designed for a system. Students can rightfully argue that video game console producers did have the ability to negotiate terms with component manufacturers prior to the development of a next-generation system.
v Competition from substitutes – a moderately strong competitive force. There were many recreation and entertainment substitutes for video games. Video gamers could engage in outdoor sports or other activities, or find entertainment indoors by watching television, reading, listening to music, surfing the Internet, playing board games, or playing a musical instrument. However, the interactive nature of video games was very intriguing for many young people and older gamers. Students should point out that other gaming platforms such as PC games, handheld games and mobile phone games were also substitutes for console-based video games.
v Threat of entry – a weak competitive force. Entry barriers that include considerable console development costs, advanced technological skills, a sizeable installed base of game consoles, game software development costs, volume guarantees to suppliers of key components and access to retailers make the threat of entry weak. The most likely new entrants would be established computer technology companies such as Apple.
v Rivalry among competing video game console producers – a fierce competitive force. Students should conclude that rivalry among competing sellers is fierce. Competition between Nintendo, Sony and Microsoft centers primarily on the technological capabilities of the consoles and on having a wide variety of appealing game titles developed either internally or through partnerships with independent game developers. The intensity of competition had driven console development and production costs to more than $800 per unit for the PlayStation 3. A third competitive weapon utilized by console makers was aggressive pricing, which resulted in a loss of more than $300 per unit on every PlayStation 3 sold. Microsoft's Xbox 360 pricing was also believed to be below its production costs. Nintendo had chosen not to compete aggressively on technological capabilities when developing the Wii and has earned profits on the sales of Wii units.

Overall Assessment: Students should conclude that the video game industry is only modestly attractive when looking at the console segment. The greatest percentage of industry profits seemed to come from the sale of game software and peripherals. Students may compare the video game business to the razor/razor blade industry, whereby razors are sold at a loss or breakeven and blades carry high margins. The development of a large installed base of console systems is essential to earning substantial profits from the sale of game software over the lifespan of a console. Therefore, students should recognize that the video game industry requires patience on the part of participants to see profits from their investments in next-generation technology.

3. What forces are driving changes in the video game console industry? Are these driving forces acting to make the industry more or less competitively intense? Are the driving forces acting to make the industry more or less profitable in future years?
Driving forces that students should be able to identify include:
v Product innovation. Students should note that since the beginning of the video game industry, each new generation of video game consoles has been dramatically more technologically advanced than prior generations. Technological advancements have included better graphics (i.e., high definition) and motion-sensor controllers.
v Emergence of new video game devices. Students will comment on the emergence of new video game devices such as mobile phones, iPods, and other handheld devices.
v Emergence of Internet-based video games. Beginning with consoles such as the PlayStation 2, game consoles were capable of connecting to the Internet to play Internet-based game software or multiplayer games.
v Societal trends. Changes in societal trends influence the disposable income consumers have to buy consoles. The industry is said to be resilient to recession. Changes in demographic groups present an opportunity in untapped market segments.
v Changing consumers. There has been a change in the target audience for video game console industry competitors with the introduction of Nintendo's Wii. Incumbents are likely to take note of this new segment.

Students should conclude that the individual and collective effect of the industry driving forces will drive development costs higher, making the industry less attractive for new entrants and increasing the number of unit sales necessary for current console makers to achieve breakeven. Students could make the argument that, as development and production costs continue to climb, consoles must evolve into central entertainment hubs that all consumers would like to have in their homes in order to achieve the sales volumes necessary to support profitability. In addition, students may suggest that the cost of developing handheld systems will likely rise as features are added to defend against game features included on wireless telephones and iPod-type devices.

4. What 3-5 key factors determine the success of video game console developers like Nintendo?
Students should identify several factors that are necessary for competitive success in the console segment of the video game industry, including the following:
v Large installed base. Students should be able to argue successfully that the development of a large installed base is the most important factor related to success in the console segment of the video game industry. A limited selection of game titles reduced consumer interest in a console, regardless of its technological capabilities.
v Technological capabilities. Video game console makers were required to develop next-generation consoles that could fully exploit the capabilities of the latest microprocessors and GPUs. Traditional gamers seemed most interested in games with realistic graphics. Nintendo's Wii did not have the graphics rendering capabilities of the PlayStation 3 or Xbox 360, but did include a highly innovative and technologically advanced wireless game controller.
v Partnerships with independent software developers. The availability of intriguing game titles was essential to building an installed base and earning residual profits from game sales.
v Acceptable development and production costs. Development costs and production costs increased as each new generation of game console became more technologically advanced. The cost of developing microprocessors and GPUs capable of performing increasingly complex instruction sets and the cost of innovative components such as Sony's Blu-ray HD optical drive had caused the cost of each PlayStation 3 unit to range from $805 to $840. The PlayStation 3's retail price caused Sony to lose as much as $305 per unit, which increased the volume of game software that had to be sold to make the business unit profitable.
v Access to distribution. Students should determine without much difficulty that access to retail distribution through big box electronics stores and large discount stores such as Wal-Mart and Target is essential to building an installed base.

5. What is Nintendo's strategy? Which of the five generic strategies discussed in Chapter 5 is Nintendo using?
What are some of the recent offensive and/or defensive strategies that Nintendo has employed? Have these tactics been successful?
Students should identify a firm's competitive strategy as being concerned with the specific game plan management uses to compete successfully and to secure a competitive advantage over its rivals. This requires that a firm out-compete its rivals by doing a better job of satisfying buyer needs and preferences. Companies can employ one of five generic strategies, or some combination thereof, to beat their rivals. Those generic strategies include the following: overall low-cost provider strategy, broad differentiation strategy, focused low-cost strategy, focused differentiation strategy and best-cost provider strategy. Students may find that Nintendo is using a broad differentiation strategy, which involves competing by being unique in ways that are valuable to a wide range of customers. Nintendo's Wii utilizes a game controller that is highly interactive by incorporating motion sensors. As such, Nintendo has successfully built a competitive advantage by incorporating features that enhance buyer satisfaction in noneconomic or intangible ways, which is one of the four ways to build a competitive advantage with a broad differentiation strategy. Nintendo's broad differentiation strategic approach has been successful since technological breakthroughs are a critical success factor in the industry. Additionally, Nintendo's recent offensive and defensive strategies have helped the company successfully implement its strategy. A core element of Nintendo's offensive strategy involved changing the market's perception of the Wii by offering a very different gaming

Wednesday, August 28, 2019

Hedgehog Concept

ls (family) conducted their business, the respect that they commanded, and the success that came with their leadership provoked my secret passion for becoming a leader at whatever capacity in society. Later I came to discover that whatever small capacity I was given the opportunity to lead in, I did it diligently, to the extent that I realized the passion was beyond inspiration and simply inborn.

The desire to excel drives my economic engine. I want to be in a better economic position than I hold at present through better leadership. The success that comes, or will come, with my good leadership skills is what drives my economic wheel.

I am superb at penmanship. My genetic ability lies more in authorship. Much of my article writing has received a lot of acclaim and positive comments. I have not yet fully exploited my inborn ability. In the event that I focus on it and put more energy into it, I have a feeling that it can be done best, and in a profitable way. I have focused more on my passion and turned a blind eye to my inborn ability, which came naturally without the influence of the people who surround me. Ironically, I have a feeling that I am not that good at writing, since it is the positive comments from my friends after I pen an article that make me feel I am good in the world of authorship.

My first step in moving forward is doing something that relates to both my passion and my skills. I should stop being persuaded by my passion alone and instead put more effort into taking what I can do best to great heights. This particular step can be achieved in a number of ways. The only thing to keep in mind in this scenario is to move from being good to being great. One way to achieve this goal is to concentrate on the field of penmanship, since it is where I am good already (Collins 25). I should put concerted effort into improving my strength rather than concentrating on my passion, where I am likely to be weak. It will take more energy to better my weakness where my passion is

Tuesday, August 27, 2019

Post the Mission Statement of the organization that you work for and give us some indication as to how that impacts the functional tactics of your organization

2009). Wal-Mart serves as a retail store that focuses on giving everyone a chance to access the essential goods that they demand. Through its low prices, it focuses on giving the poor a chance to access the same products and goods as the rich. Wal-Mart pursues low product differentiation and conducts minimal advertising. At the core of its operations, it targets average customers. It aims at giving the most value to its customers but keeps its prices to the minimum that ensures the average customer can afford the products. The management adopts business-level strategies that involve locating stores at remote locations outside major cities. By locating the stores in small cities, Wal-Mart aims at serving average consumers (Hill & Jones, 2008, p. 113). Further, the management has robust programs to improve the working environment for its employees. Wal-Mart's success emanates from its mission statement, which targets serving average customers by offering lower prices to improve their lives.

Reference
Wal-Mart Stores, Inc. (2009). Walmart 2008 Annual Report. Retrieved January 29, 2015 from

Monday, August 26, 2019

Indifference Curve Analysis

The indifference curve is a particular selection of such combinations of goods, taken from the plot area, and all combinations on an indifference curve represent the fact that the consumer derives the same amount of total utility from consumption. Since the utility derived from the variously combined two goods on an indifference curve is the same, the consumer is said to be indifferent between the various combinations of the two goods, and the curve carrying all such combinations is termed the indifference curve. Normally, with desirable goods on both axes (say, apples and oranges), the curve has a certain shape, further from the origin when both quantities are positive than when one is zero (Definition, 2006). Convexity to the origin of indifference curves is explained by the fact that as one consumes more of one good, the additional utility it provides diminishes and the tendency to replace it with the other good increases.

An example can illustrate this construct. It has been stated above that the indifference curve carries mostly hypothetical pairs of goods combinations, amongst which the consumer is indifferent. However, the consumer cannot purchase quite a few of these combinations, due to two factors. One is the prices of the two goods and the other is his income, or the budget available for expenditure on these two goods. The budget is an unalterable constraint, while prices can be taken care of by moving from one good to the other. Continuing with the example above, suppose each apple was priced at $2 and each orange at $2.5, and given that the consumer had an unalterable budget of $50 allocated for purchasing these two goods, we observe that the consumer could either purchase 25 apples and no oranges or 20 oranges and no apples in the two situations of exhausting the entire budget. However, in neither of these situations does the consumer maximize his utility, as he is away from his indifference curve despite exhausting his budget. In fact these two points represent the two extremes of the budget line and lie on the horizontal and vertical axes respectively. In the figure below, the line formed by joining the points (0, 20) and (25, 0) is the budget line. The budget line forms a triangular area with the two axes. This triangular area is the area of feasible purchases. The budget line, and everything inside it, is called the "feasible set" or the "consumption opportunity set" (Modern, 2006). All combinations of apples and oranges plotted in this triangular area can be purchased out of the given budget. This area is depicted by red lines. All goods combinations falling outside this triangular area cannot be purchased, as they would not fit within the budget constraint. This area is depicted with blue lines. Thus the budget line narrows down the choice available to the consumer. In case the consumer increases his budget for the two goods across the board (say consequent to
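To make the budget-line arithmetic above concrete, here is a short Python sketch (not part of the original essay) that uses the stated prices and budget, checks the two intercepts of the budget line, and lists a few bundles inside the feasible set. The variable names and the sampled bundles are illustrative assumptions only.

PRICE_APPLE = 2.0    # dollars per apple
PRICE_ORANGE = 2.5   # dollars per orange
BUDGET = 50.0        # dollars available for the two goods

def cost(apples, oranges):
    """Total spending on a bundle of apples and oranges."""
    return apples * PRICE_APPLE + oranges * PRICE_ORANGE

# Intercepts of the budget line: spend the whole budget on one good.
print(f"{BUDGET / PRICE_APPLE:.0f} apples or {BUDGET / PRICE_ORANGE:.0f} oranges")

# A few bundles on or inside the feasible set (cost <= budget).
for apples in range(0, 26, 5):
    remaining = BUDGET - apples * PRICE_APPLE
    oranges = int(remaining // PRICE_ORANGE)  # largest affordable whole number of oranges
    print(f"{apples:2d} apples and {oranges:2d} oranges cost ${cost(apples, oranges):.2f}")

Running it reproduces the intercepts mentioned in the essay, 25 apples or 20 oranges, and shows that every listed bundle exhausts at most the $50 budget, which is exactly the feasible set bounded by the budget line.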

Global Business Strategy

The study also includes the opportunities and the threats that might be faced by the organisation due to globalisation. It also highlights the varied assets and resources of the organisation, which have helped in expanding its functions in global markets. The study also aims to spotlight the numerous issues of Valero Energy Corporation that must be addressed in order for it to prosper and sustain itself in future among other competing organisations.

Table of Contents
Executive Summary
History and Internationalisation Process of Valero Energy Corporation
Current External Environmental Conditions
Industrial Conditions
Resource Audit of Valero
Identification and Evaluation of Firm's Current Strategy
Analysis of Strategic Approach of Valero to the Global Management of Its Operations
Issues Faced By Valero
Recommendations
References
Bibliography

History and Internationalisation Process of Valero Energy Corporation
Valero Energy Corporation is one of the largest independent oil refiners in the United States. It is a reputed brand and a Fortune 500 corporation with its head office located in San Antonio, Texas. Valero is a producer and marketer of transportation fuels along with varied other petrochemical products. Along with refining, it also functions in two other segments, namely retail and ethanol. In addition, Valero operates 15 refineries throughout the United States, Canada, the Caribbean and the United Kingdom. With the help of these refineries, Valero has a production capacity of approximately 2.9 million barrels of oil every day. The prime objective of Valero is to offer its customers clean fuel and other petroleum products in order to enhance its reliability and dependability (Valero Marketing and Supply Company, 2012). Valero is a leading manufacturer of ethanol as well, with a capacity of about 1.1 billion gallons each year, in order to satisfy the needs and demands of its consumers. Valero is one of the foremost retail operators of refined fuels and markets its products on a wholesale basis through bulk and rack networks, along with 6,800 retail outlets under the names Valero, Beacon and Diamond Shamrock. The organisation was established in the year 1980 with the aim of diversifying into an international conglomerate operating in the oil and gasoline sector. Valero started its operation as a spinoff of Coastal States Gas Corporation and is also recognised as a global manufacturer and dealer of transport fuel (Valero Marketing and Supply Company, 2012). Valero is a vehicle of advancement that always attempts to enhance its business operations throughout the world with the help of varied mergers and acquisitions, such as Basis Petroleum Inc, Ultramar Diamond Shamrock and the Pembroke Refinery. This strategy has mainly resulted from the increasing demand for fuel in the emerging markets across the globe, as they develop and update their economies, along with the sluggish growth rate of oil in the markets of the United States. It is one of the most imperative strategic challenges faced by Valero, which forced the management to undertake this tactical step. It proved quite beneficial for Valero to augment its international operations, resulting in an improvement of its net revenue to US$125,987 million in the year 2011 as compared to US$82,233 million in 2010 (Valero Marketing and Supply Company, 2012).
Moreover, the operating income also increased to US$31,293 in 2011 from US$20,561 in 2010 (Valero Energy Corporation, 2010). Thus, international business dealings helped Valero to augment its brand identity and equity among the other competitors in this segment of the global market. The objective of the

Sunday, August 25, 2019

Performance and Guanxi Effect on Job Security to Reduce Turnover Intention in Saudi Arabia

This essay stresses that data collector bias can be minimized by having one researcher distribute the questionnaires and by standardizing conditions, such as ensuring consistent personal conduct toward all respondents, including friendliness and respect for privacy. The psychological and physical surroundings where the data will be collected should be made private, ensuring confidentiality and general physical comfort. Subjects will be asked not to write their names or any identification on the questionnaires to ensure anonymity. Additional data for use in the research will be obtained from semi-structured interviews with the selected population of workers and managers from the chosen firms in the chosen industries.

This paper concludes that, to ensure the validity of the data collected, the questionnaire items will be derived from information gathered in the literature review, to guarantee that they are properly representative of what private organisations consider important with respect to employee turnover. The number of respondents who decline to take part after being approached will be reported, to enable a judgment of threats to external validity. When conducting research in a certain society, ethical consideration is very important because the researcher has to assure the respondents of privacy and trust. Saudi Arabian people have a high degree of respect for their cultural heritage; the largest share of the population is Muslim and the state has a monarchical government.

Saturday, August 24, 2019

Critically discuss with reference to the car industry e.g. (Toyota), the Japanese Lean production revolution

The Meiji Restoration transformed the Japanese empire into an industrial world power. With new-found pride in their country and their culture, the Japanese flexed their muscles overseas. After the First Sino-Japanese War (1894-95) and the Russo-Japanese War (1904-05), Japan conquered a part of China, some parts of Russia, Taiwan and Korea. These territorial conquests provided Japan with valuable raw materials and cheap labor for industries back home. In turn, these occupied territories were fertile markets for Japanese products. The relentless hunger for territorial expansion found expression in Japan's annexing of Manchuria in 1931. In 1937, Japan occupied more territories in China by waging war on that country for the second time (the Second Sino-Japanese War, 1937-45). All these aggressive expansionist plans brought Japan into direct conflict with the U.S. and its allies. Japan joined the Axis powers, Germany and Italy, in 1940, and in 1941 declared war on the U.S. The war with Japan ended after the atomic bombing of Hiroshima and Nagasaki in 1945. Between 1945 and 1952, post-war Japan was administered by the U.S. government. To help Japan stand on its own feet, American financial and technical aid was provided to Japanese business and industry. As part of the technical assistance, the U.S. government brought in industrial and managerial experts from the U.S. to train Japanese companies in modern management and production methods. One of the most definitive techniques that influenced Japanese manufacturing, and made Japan the powerhouse that it is today, was the 'Training Within Industry' concept.

The Training Within Industry (TWI) service was a creation of the U.S. Department of War, devised to meet wartime needs. During the war, manpower was required by the armed forces to fight the enemy. At the same time, industry, which provided key material and equipment to the defense forces, faced a shortage of hands to finish production. Therefore, to optimize the productivity of the U.S. workforce, a program for training supervisors and workers in industrial establishments was devised. The training was to be done by experts drawn from universities and businesses. The aim of this program was to improve productivity and quality. The basic concept of the training consisted of the following sequence:
a. Study and understand the process.
b. Break up the process into its sub-components.
c. Educate the supervisor and the worker on the process and its sub-components.
d. Train the supervisor and the worker to work efficiently and without wastage.
e. Train the worker to evaluate the end result and suggest corrective steps.
f. Train the supervisor to deal with workers effectively and fairly.
g. Train managements to develop newer and better training programs.

The essential elements of the TWI program were similar to the principles laid down by Frederick W. Taylor (1856-1915), the father of scientific management. In his book 'The Principles of Scientific Management' (1911), Taylor proposed the following:
a. Replace rule-of-thumb work methods with methods based on scientific study of the task.
b. Scientifically select, train, and develop each employee rather than passively leaving them to train themselves.
c. Divide work equally between managers and workers so that

Friday, August 23, 2019

The Relation between Appearance and Reality

Appearance may refer to something that simply seems to be, and reality is what the object actually is. These two aspects are normative and positive respectively, and a number of philosophers like Locke, Berkeley and Descartes have written about them as their main areas of focus, in order to decipher what appears to be and what actually is in reality. This paper helps to provide an insight into the realms of appearance and reality with respect to the works of the above-mentioned philosophers and how their theories and ideas have compelled the world today to think in a certain manner.

The main reason one began to understand a demarcation between appearance and reality is misleading situations in everyday life. Human beings have a vast imagination which can take them places; however, this same imagination leads them into thinking things that may not actually exist. For example, emotions like fear and terror are created in the minds of people. Fear of the dark or the unknown is something that most people possess as a natural instinct; however, the fact of the matter remains that the fear is simply something that appears to be and does not exist in reality, because the fear has most of the time been planted by someone or something else rather than arising out of some situation. There are times when people assume things and circumstances and end up realising that whatever happened did not actually take place; it simply seemed to be a certain way. Reality is that aspect of life that people are actually living in the present. It is not easy for a man to live in reality without getting ideas about what to do next. Thus arises the aspect of ideas and the perception of the near future, which gives rise to appearances. Appearance is something that seems to be, or something that a person might think the actual situation consists of, but

Thursday, August 22, 2019

Beowulf Paper Essay Example for Free

Beowulf Paper Essay "Time and again, foul things attacked me, lurking and stalking, but I lashed out, gave as good as I got with my sword. My flesh was not for feasting on, there would be no monsters gnawing and gloating over their banquet at the bottom of the sea. Instead, in the morning, mangled and sleeping the sleep of the sword, lay slopped and floated like the ocean's leavings. From now on sailors would be safe, the deep-sea raids were over for good. Light came from the east, bright guarantee of God, and the waves went quiet; I could see headlands and buffeted cliffs. Often, for undaunted courage, fate spares the man it has not already marked. However it occurred, my sword had killed nine sea-monsters. Such night-dangers and hard ordeals I have never heard of nor of a man more desolate in surging waves. But worn out as I was, I survived, came through with my life. The ocean lifted and laid me ashore, I landed safe on the coast of Finland." * Seamus Heaney, Beowulf: A New Translation, lines 559-581

The epic poem Beowulf is a classic hero tale. Throughout the poem the author shows how Beowulf is an archetypal hero through different characteristics, good and bad combined. He usually displays health, skill, consideration, honor, loyalty, respect and the qualities of a protagonist, and at times he is also an antagonist. He sticks to what the king asked him to do and fights off Grendel; then he stays around to fight off Grendel's mother and the dragon to keep the town out of danger and terror, showing loyalty, honor, skill, respect, and health. But he acts as an antagonist when he taunts Grendel to get him to battle him (Lines 301-709). He also shows consideration when he fights off Grendel's mother after she seeks vengeance for Grendel (Lines 710-1007), and when he fights off the dragon (Lines 2211-2512). In the particular passage above, Beowulf is perceived as healthy, skillful and educated. He comes off as healthy because he says that he fought monsters time and time again, which requires a healthy constitution to hold up against the constant fighting. He comes off as skillful because he says that no monsters were gloating over him at the bottom of the sea; instead he was lying on top of the sea, still living, and then landed on shore. He is also skillful because he killed nine sea-monsters and protected the sailors from all of the sea monsters that once terrorized and killed them. Beowulf comes off as educated because, of all the sailors and men who passed through that part of the sea, he was the only one who had the knowledge and skill to kill off the monsters that made it so dangerous and such a hard ordeal. And it is not only in this passage that the author shows that Beowulf is healthy; it is all the way up until the very last battle, where his health finally fails because he cannot withstand the wound. But even with all the good, Beowulf is also flawed; he does not have the best moral character for a figure in a Christian poem. He boasts about how he killed Grendel, and still takes money from the people in the town even though they do not have much to give (Lines 1925-2210). A person of real moral integrity would not accept the money, gold and horses from the townspeople, and would not boast about killing someone; he would boast that he protected the town from danger. The author successfully proved Beowulf to be the great hero he was said to be through his depiction of Beowulf as the skillful, educated epic hero and through the way he told the story.

Works Cited: Heaney, S. (n.d.). Beowulf: A New Translation.

Wednesday, August 21, 2019

Search for my Tongue Essay Example for Free

Search for my Tongue Essay Sujata Bhatt tells us about the difficulties she has speaking with a new tongue when her old tongue starts to rot away in her mouth, the new tongue pushing it out of the way and trying to take over: your mother tongue would "rot, rot and die in your mouth" until you had to spit it out. This means the poet is stuck between two languages, and the new language (English) is making her lose her mother tongue (Gujarati). Having two tongues, the poet feels totally confused, and speaking English makes her forget her mother tongue. She also tried to think and dream in both languages at the same time, but she couldn't; she has dreamt in Gujarati and transliterated it into English. At the end of the poem her feelings change a little, because she describes how, overnight, her confidence grows back even stronger than before: while she dreams, her mother tongue "grows back, a stump of a shoot grows longer, grows moist, grows strong veins, it ties the other tongue in knots". In this way she highlights the difficulties of being part of two cultures. The dominant culture is always the mother tongue (her Gujarati culture withstands the influences of the American lifestyle). The shape of the poem is divided into three parts:
- The first part of the poem explains her conflict over losing her mother tongue while learning a new, foreign tongue.
- The second part of the poem is written in Gujarati (her mother tongue) and explains her fear of losing her identity.
- The third part of the poem is translated into English and focuses on her determination to retain her Gujarati culture.
The poet includes the Gujarati as an indication of the strong link between language and culture. This shows us that she tries to use both languages at the same time in her dreams. The central part of the poem looks different because it is written in Gujarati and transliterated into English. I think the poet included this Gujarati script, with its phonetic presentation underneath, as an indication of the strong link between language and culture, and to make it possible for you to realise how difficult it would be to live in a foreign country and speak a foreign language. Fundamentally, one image links this whole poem: a flower. She compares her mother tongue to a flower that grows (a symbol of beauty and life); like a flower, the foreign language also grows, but her mother tongue is eventually stronger. This is called an extended metaphor. I think the poet used this extended metaphor in order to compare the differences and influences of the two languages. The list below describes some of the ways in which her mother tongue is compared to a plant. The poet uses both negative and positive images in describing her mother tongue. Sujata Bhatt feels that the foreign tongue has a more powerful effect than Gujarati, but her Gujarati culture overcomes the influences of the American lifestyle and still keeps the mother tongue strong. In conclusion, I believe that I have learnt a lot about the culture and traditions of an immigrant. The writer feels that she is confused between two languages. She feels her mother tongue is being lost in her mouth while the foreign tongue is becoming more frequently used, and this makes her uncomfortable. At the end of the poem, I feel that she gives us an inside view of what it must feel like to be in a foreign country, speaking a foreign language.

Tuesday, August 20, 2019

The importance of the Stock Market to the Economy

The importance of the Stock Market to the Economy The stock market is an important part of the economy because it organises resources and channels them to useful investments; to perform this role it must have a proper association with the economy. Capital markets are important elements of a modern market-based economic system, as they provide the channel for the flow of long-term financial resources from savers to borrowers of capital. Stock prices of the oil sector have been considered for this study. It is the fastest-growing and most important part of the stock exchange, because its prices are deeply affected by macroeconomic variables. Investors in Pakistan who invest for the long term and the short run will benefit from this research. Most of the past studies conducted on this topic show that macroeconomic variables, including the interest rate and the exchange rate, directly impact stock prices: if any changes take place in them, the stock price is directly affected. Macroeconomic variables are inversely related to the stock price.

In markets, investors provide long-term funds in exchange for long-term financial assets offered by borrowers. The stock exchange is an important part of any country in the sense that it organises domestic resources and channels them to productive investments. However, for this purpose it must have an important association with the economy. Capital markets are main elements of a modern market-based economic system, as they serve as the channel for the flow of long-term financial resources from the savers of capital to the borrowers of capital. Efficient capital markets are essential for economic growth. With the increasing globalization of economies, worldwide capital markets are also becoming increasingly integrated, and such integration is constructive for global economic growth.

Hussainey and Ngoc (2009) examined the effect of macroeconomic variables on the Ghana Stock Exchange. They found that macroeconomic indicators such as lending rates and the inflation rate affect stock market performance. Their results suggested that macroeconomic indicators should be considered by investors in developing economies. This motivates us to examine the degree to which this conclusion is applicable to another emerging stock market, in Viet Nam.

Huberman and Zhenyu (2005) investigated how the market embraces both new issues in the primary market and trading in the secondary market. Such securities might be raised in an organized market such as the stock exchange. As a marketplace where securities, including stocks, bonds and shares, are bought and sold openly with relative ease, the stock exchange is very important to investors. The existence of a stock exchange in a capital market helps to broaden the share ownership of a company and to distribute the nation's wealth more evenly, by making it possible for people in different locations to own shares in a firm in another location by purchasing shares, bonds and stock through the simple mechanism of the stock market.

Kandir (2008) investigated the linkage between stock prices and macroeconomic variables for some developing countries in Eastern Asia and examined the impact of macroeconomic risks on the equity market of the Philippine stock exchange. The findings show that fluctuations in the exchange rate and political changes affecting owners of Philippine equities cannot explain Philippine stock returns.
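The studies surveyed in this introduction, like the present paper's own analysis of oil-sector stock prices against the interest rate and the exchange rate, ultimately come down to estimating a linear regression of prices or returns on macroeconomic series. A minimal sketch of such a regression in Python, using the statsmodels package and purely hypothetical file and column names (oil_price_index, kibor_rate, pkr_usd_rate are placeholders, not the study's actual data), might look like this:

    # Hypothetical sketch: regress oil-sector stock prices on macro variables.
    import pandas as pd
    import statsmodels.api as sm

    # Monthly data over roughly seven years (assumed CSV layout).
    data = pd.read_csv("oil_sector_monthly.csv", parse_dates=["month"], index_col="month")

    y = data["oil_price_index"]                                 # dependent variable
    X = sm.add_constant(data[["kibor_rate", "pkr_usd_rate"]])   # interest rate, exchange rate

    model = sm.OLS(y, X).fit()
    print(model.summary())   # negative coefficients would indicate the inverse
                             # relation described in the abstract above

Statistical significance and sign of the estimated coefficients are then read off the regression summary; this is only an illustrative setup, not the paper's actual estimation.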
Mohammad et al. (2009) noted that the Karachi Stock Exchange is the largest and most active stock market in Pakistan, accounting for 65% to 70% of the value of the country's total stock transactions; as of October 1, 2004, 663 companies were listed, with a market capitalization of $23.23 billion and a listed capital of US$6.59 billion. Pakistan's industrial exports and foreign investment are today growing at the country's fastest rate ever. The country's foreign exchange reserves skyrocketed to $12,327.9 million in 2003-04 from $2,279.2 million in 1998-99. Similarly, several Pakistani stocks are now traded on international markets, and foreign brokerage houses are now being allowed, through joint ventures with Pakistani investment bankers, to participate in primary as well as secondary markets in Pakistan. The stock exchange is not only crucial but also central to the entire mobilization process, because it offers an opportunity for continuous trading in securities.

The purpose of this study is to examine the impact of macroeconomic variables, namely interest and exchange rates, on the stock prices of the oil sector by using regression analysis. Seven years of data on both the dependent variable and the independent variables have been used to determine the result. The macroeconomic variables provide more information about the relationship between stock returns and economic activity. The study also considers other firm characteristics in order to obtain better insight into the return-generating process. Chapter two consists of the literature review, chapter three of the methodology, and chapter four of the results and interpretations.

Chapter 2: Literature review

Roll and Ross (1980) suggested that the arbitrage pricing theory is a suitable alternative because it agrees with what appears to be the intuition behind the capital asset pricing model. The arbitrage pricing theory is based on a linear return-generating process as its first principle and requires no utility assumption. It is also not restricted to a single period: the theory is applicable in both multi-period and single-period cases. Arbitrage pricing theory begins with an assumption about the return-generating process. There are two major differences between arbitrage pricing theory and capital asset pricing theory: first, arbitrage pricing theory allows more than one generating factor; secondly, arbitrage pricing theory demonstrates that any market equilibrium must be consistent with no arbitrage profit, so every equilibrium will be characterized by a linear relationship between each asset's expected return and its sensitivities to the factors. After the determination of factors in asset returns other than the market return, the arbitrage pricing theory was introduced to determine the association between the variables used in a study; the theory allows the use of variables without the need to pre-specify them, but it did not take long for criticisms to appear. One foremost criticism was that the arbitrage pricing theory cannot specify the factors, but only describe them statistically. This shortcoming of the arbitrage pricing theory was evident even in the first empirical arbitrage pricing theory study.
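In symbols, the linear return-generating process and the resulting no-arbitrage pricing relation that Roll and Ross describe are commonly written (in LaTeX notation) as:

    R_i = E(R_i) + \sum_{k=1}^{K} b_{ik} F_k + \varepsilon_i ,
    \qquad
    E(R_i) = \lambda_0 + \sum_{k=1}^{K} b_{ik} \lambda_k ,

where the F_k are the surprises in the K systematic factors (with zero expected value), b_{ik} is asset i's sensitivity to factor k, \varepsilon_i is idiosyncratic noise, \lambda_k is the risk premium on factor k, and \lambda_0 is the return on a riskless (or zero-beta) asset. With K = 1 and the market portfolio as the single factor, the pricing relation reduces to the capital asset pricing model.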
Ammer (1993) investigated the empirical relation between macroeconomic variables and stock prices in ten different countries, with the main objective of finding the links between these variables. A stock price decomposition is used to find the channels through which negative stock price movements are associated with positive inflation, namely a decrease in dividends and an increase in required real equity returns. Ammer (1993) observed in the results that, in general, increases in inflation are directly linked with decreases in dividends and also with decreases in required real equity returns. This favors the corporate-tax-related theories, in which changes in the tax system associated with an increase in the rate of inflation raise the firm's cost of capital relative to the return earned by investors in the firm. Ammer (1993) also found that, in the United States (US) and the United Kingdom (UK), estimates of an arbitrage pricing theory model with a conditionally heteroscedastic economic factor imply that macroeconomic variables may increase the average cost of capital.

Kaul (1995) studied the impact of changes in monetary policy regimes and expected macroeconomic variables on stock prices. Post-war evidence from four countries reveals a direct link between these relations and the central banks' operating targets, which include the money supply, interest rates and exchange rates. Specifically, the post-war inverse relation found between stock prices and expected macroeconomic variables is significantly stronger during interest-rate regimes.

Bonomo and Garcia (1997) investigated a version of the conditional capital asset pricing model with respect to a local market portfolio, proxied by the Brazilian stock index, over the period 1976 to 1992. For this purpose they also estimated a conditional arbitrage pricing theory model, using the difference between the 30-day rate and the overnight rate as a second factor in addition to the market portfolio, in order to capture the large inflation risk present during this period. Bonomo and Garcia (1997) applied the conditional capital asset pricing model and arbitrage pricing theory models to portfolios formed from twenty-five securities traded on the Brazilian market, which played an important role in the appropriate pricing of the portfolios. The unconditional moments of the return series for the stock market index, taken from the IFC Emerging Markets database, show an average return in US dollars of 21.15% and an average excess return in local currency of 28.82%. By industrialized-country standards, these returns are exceptionally high. According to fundamental asset pricing models such as the capital asset pricing model and the arbitrage pricing theory, high expected returns on a security are linked with high measures of risk with respect to a number of risk factors that directly affect the market portfolio. According to the capital asset pricing model, the expected return on a portfolio of assets is determined by the covariance of the portfolio's return with the market portfolio's return.
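For reference, the capital asset pricing model ties an asset's expected return to exactly that covariance through its beta; in LaTeX notation:

    E(R_i) = R_f + \beta_i \left( E(R_m) - R_f \right),
    \qquad
    \beta_i = \frac{\mathrm{Cov}(R_i, R_m)}{\mathrm{Var}(R_m)} ,

where R_m is the return on the market portfolio and R_f the risk-free rate. The conditional versions tested in studies like Bonomo and Garcia's allow the betas and risk premia to vary over time with the information available at each date.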
In selecting a market portfolio, two different views can be taken. One view considers the Brazilian stock market to be segmented and concentrates on local risk factors, consisting of macroeconomic variables that help explain local stock returns. The other perspective is that international investors diversify their portfolios worldwide, investing their funds in different markets of the world in order to reduce portfolio risk; if they invest in only one market without diversifying, risk is high, because if that market declines the investor directly suffers a loss on the investment. Bonomo and Garcia (1997) adopt the view that the Brazilian stock market is segmented and test a version of the conditional capital asset pricing model with respect to a local market portfolio, represented by the Brazilian stock index in the IFC database. The conditional capital asset pricing model is tested on a set of size portfolios produced from a total of twenty-five securities traded on the Brazilian markets. The IFC Emerging Markets database of the World Bank provides data on stock prices and other financial variables, for both the stock index and individual stocks, in a series of emerging and newly industrialized countries. The Brazilian stock market is dominated by individual investors, characterized as having little investment knowledge or experience; they speculate in the stock market in the absence of market experience. Stocks are often bought and sold on historical prices and on market news about stock prices, resulting in stock market manias. Bonomo and Garcia (1997) selected the full list of twenty-five common shares that were listed on the Brazilian stock exchange from January 1976 to December 1992. To test the model, returns on individual securities were selected, but in order to obtain a tractable linear result they limited the number of variables used in the process. Bonomo and Garcia (1997) found the covariance parameters to be small in absolute value, as is the variance of the excess return. The betas of the second and third portfolios take negative values, and their magnitudes increase with capitalization value, which indicates that the high-value portfolios offer the best hedge against this risk. It should be noted that, by adding a second factor, all the market portfolio betas become lower compared with the other portfolios.

Altay (2003) suggested that a range of macroeconomic variables representing the essential indicators of an economy be employed in the factor analysis process, as realizations of the principal economic phenomena. Arbitrage pricing theory has a serious disadvantage in defining the systematic risk factors; in contrast, the market portfolio, the only risk factor in the capital asset pricing model, is clearly defined. Asset prices are supposed to respond to a series of macroeconomic forces; some macroeconomic changes influence asset prices more strongly than others, and some do not influence them at all. One of the most well-known arbitrage pricing theory tests on this topic found a number of important economic variables to have a systematic influence on asset returns.
Arbitrage pricing theory itself only determines the number of risk factors that systematically explain stock market returns, through factor analysis methods. Altay (2003) notes that five factors were identified for the New York Stock Exchange and AMEX, depending on the period length and the size of the stock groups under examination. In this paper the empirical analysis is applied to both German and Turkish stock market and economic data; Germany and Turkey are both European countries with dissimilar levels of economic development, and there are numerous earlier empirical results of the arbitrage pricing theory for the German and Turkish stock markets. Monthly returns of ninety-three assets are used, and the principal components analysis method is applied in order to test the arbitrage pricing theory. In addition, the maximum likelihood factor analysis method is applied, using some macroeconomic variables as possible common risk factors. Asset prices are supposed to respond to macroeconomic factors, and unexpected changes in macroeconomic factors are expected to be rewarded in stock markets. The factor structures of the German and Turkish economies are presented by employing the same eight macroeconomic variables and stock market proxies in the principal components and maximum likelihood factor analyses. In each type of analysis, four factors are extracted from the German variables while only three are derived from the Turkish variables, representing the dissimilar factor structures of these two economies.

Javed and Aziz (2005) noted that the development of financial equilibrium asset pricing models has been one of the most significant areas of research in modern financial theory, and that these models are broadly tested in developed markets. They examined the strength of the Arbitrage Pricing Theory (APT) model on returns from twenty-five actively traded stocks on the Karachi Stock Exchange, using monthly data from January 1997 to December 2003. Arbitrage pricing theory suggests that there are a number of sources of risk in the economy that cannot be eliminated; Javed and Aziz (2005) considered these in relation to economy-wide factors such as inflation, the interest rate, the exchange rate and changes in aggregate output. Instead of calculating a single beta, as in the capital asset pricing model, arbitrage pricing theory calculates several betas, estimating the sensitivity of an asset's return to changes in each factor. The arbitrage pricing theory assumes that a security's return is a linear function of these factors. It thus specifies that the risk premium for an asset is connected to the risk premium for each factor, and that as the asset's sensitivity to each factor increases, its risk premium increases as well. The arbitrage pricing theory predicts that the prices of all sensitive assets in the economy conform to the condition of no arbitrage: an investor holding a well-diversified portfolio cannot receive any additional return merely by altering the weights of the assets in the portfolio, holding both systematic and unsystematic risk constant. The theory states that there is a set of fundamental sources that influence all stock returns, and that the stock return is a linear function of a certain number of economic factors, while these factors are unobservable and not pre-specified.
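The principal components approach described above can be sketched in a few lines: statistical factors are extracted as the leading eigenvectors of the covariance matrix of the return series. The following is a minimal, hypothetical illustration in Python, with random numbers standing in for the actual monthly return panel:

    # Hypothetical sketch: extract statistical factors from a panel of monthly returns
    # via principal components, as in factor-analytic tests of the APT.
    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.normal(size=(120, 93))          # 120 months x 93 assets (placeholder data)

    demeaned = returns - returns.mean(axis=0)     # center each asset's return series
    cov = np.cov(demeaned, rowvar=False)          # 93 x 93 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues returned in ascending order

    k = 5                                         # number of statistical factors to keep
    loadings = eigvecs[:, -k:][:, ::-1]           # loadings on the k largest components
    factors = demeaned @ loadings                 # 120 x k time series of factor realizations

    explained = eigvals[-k:][::-1] / eigvals.sum()
    print("share of variance explained by each factor:", np.round(explained, 3))

Whether the extracted factors are "priced" is then checked in a second step by regressing average returns on the estimated loadings; the sketch only shows the extraction stage.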
To test the arbitrage pricing theory empirically, the following approach has been used. One can simultaneously estimate the asset sensitivities and the unknown factors by applying factor analysis to stock returns; alternatively, one can identify in advance general factors that explain pricing in the stock market. Such macroeconomic variables can be those affecting either companies' future cash flows or future risk-adjusted discount rates. The selected twenty-four stocks are the most active stocks, with roughly an 80% weight in the aggregate market capitalization of the KSE-100 index companies. In order to analyze the stability of the factors in the arbitrage pricing theory, the period is divided into two sub-periods, and monthly data have been selected for the examination; these series are reported almost daily by the news media. The outcome indicates that in the whole sample period only two priced factors are found under the exploratory factor analysis approach; in the first sub-period none of the factors appears to be priced, and in the second sub-period only one priced factor is found at the 5% level of significance. The number of priced factors seems to be very low, and the results of this approach indicate considerable instability in the explanatory power of the arbitrage pricing theory. The results of the two different testing methods for the arbitrage pricing theory are nearly identical, because in the whole sample period two priced factors are found. This is an encouraging result, which supports the theory, but the number of priced factors seems to be very low and the results point towards substantial instability in the explanatory power of the arbitrage pricing theory. The arbitrage pricing theory is an alternative to the Capital Asset Pricing Model: both relate asset returns to their covariance with other variables, whereas the capital asset pricing model focuses only on the covariance with the market portfolio return.

Huberman and Zhenyu (2005) suggested that the arbitrage pricing theory entails a procedure to identify at least some features of the underlying factor structure. Merely stating that some collection of portfolios (or even a single portfolio) is mean-variance efficient relative to the mean-variance frontier spanned by the existing assets does not constitute a test of the arbitrage pricing theory, because one can always find a mean-variance efficient portfolio. Consequently, as a test of the arbitrage pricing theory it is not sufficient merely to show that a set of factor portfolios satisfies the relation between the return and its covariance with the factor portfolios.

Gunsel and Cukur (2007) investigated the performance of the Arbitrage Pricing Theory (APT) on the London Stock Exchange for the period 1980-1993, using monthly data. The arbitrage pricing approach introduced by CRR (1986) involves identifying the macroeconomic variables that directly impact stock returns: macroeconomic conditions influence the returns on stocks, and utilizing macro variables in the return-generating process provides a basis for approximating stock returns. The simplest theory of pricing a financial asset is to discount its future cash flows. Hence, exogenous variables that affect the future cash flows or the risk-adjusted discount rate of a company must be measured; the aim is to recognize the macroeconomic forces that influence the stock market. For this purpose seven economic variables are examined, and the model is designed to test two sets of conditions.
These are economy-wide conditions, such as the term structure of interest rates, inflation, the money supply, the exchange rate and the risk premium, and industry-specific conditions, namely the dividend yield and industrial production. Their results suggest that share prices are affected in a different manner than described in CRR. This can be explained by the idea that other explanatory variables may be at work in the UK, or that the CRR methodology is inadequate. They reported that the interest rate, inflation and the money supply were among the factors found to be significant; however, in this case unexpected inflation appears to be insignificant.

Humpe and Macmillan (2007) investigated the relationship between the stock market and a series of different macroeconomic and financial variables across stock markets and over a range of time horizons. Financial economic theory proposes a number of models that give a structure for examining this association. Arbitrage pricing theory is one way of linking macroeconomic variables to the stock market, where different kinds of risk factors, which affect the market in different ways, can explain asset returns. The early investigations of arbitrage pricing theory focused on individual security returns, but it may also be used for the aggregate stock market; in this case any change in a given macroeconomic variable can be seen as reflecting a change in an underlying systematic risk factor influencing future returns. Many of the past empirical studies based on arbitrage pricing theory that link the state of the macro economy to stock market returns model a short-run relationship between macroeconomic variables and the stock price in terms of first differences, assuming trend stationarity. These papers found a significant relationship between stock market prices and changes in macroeconomic variables. An alternative, but not inconsistent, approach is the discounted cash flow or present value model. This model relates the stock price to future expected cash flows and the discount rate applied to those cash flows; any macroeconomic factor that influences expected future cash flows or the discount rate should therefore influence the stock price. This line of work showed that a long-term moving average of earnings predicts dividends, and that the ratio of this earnings variable to the current stock price is powerful in predicting stock returns over several years. The only negative coefficients are found on long-term interest rates. It has also been found that European stock markets are highly integrated with that of Germany, and that industrial production, stock prices and short-term rates in Germany positively influence returns on other European stock markets (namely France, Italy, the Netherlands, Switzerland and the UK). Humpe and Macmillan (2007) draw upon theory and existing empirical work as motivation to select a number of macroeconomic variables that might be expected to be strongly related to the real stock price. They use these variables in a cointegration model to compare and contrast the stock markets of the US and Japan; the aim is to examine whether the same model can explain the US and Japanese stock markets while yielding consistent factor loadings.
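The discounted cash flow (present value) model mentioned above can be written, in LaTeX notation, as:

    P_t = \sum_{k=1}^{\infty} \frac{E_t\left[ D_{t+k} \right]}{(1 + r)^{k}} ,

where P_t is today's stock price, E_t[D_{t+k}] the cash flow (dividend) expected k periods ahead, and r the discount rate. Any macroeconomic shock that changes expected cash flows or the discount rate therefore moves the price.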
This might be highly relevant to private investors, pension funds and governments, as many long-term investors base their investment in equities on the assumption that corporate cash flows should grow in line with the economy, given either a constant or slowly moving discount rate. Unanticipated inflation may directly influence real stock prices negatively through unexpected changes in the price level. Inflation uncertainty may also affect the discount rate, thus reducing the present value of future corporate cash flows.

Tursoy, Gunsel and Rjoub (2008) set out to test the arbitrage pricing theory empirically on the Istanbul Stock Exchange for the period February 2001 to September 2005, on a monthly basis. The arbitrage pricing theory is a theoretical alternative to the capital asset pricing model; earlier work analyzed the strength of the arbitrage pricing theory in the US securities market, using US macroeconomic variables as proxies for the underlying risk factors driving stock returns, and found several of these macroeconomic variables to be important in explaining expected stock returns, particularly industrial production, changes in the risk premium, and twists in the yield curve. Tursoy, Gunsel and Rjoub (2008) analyzed the empirical applicability of the arbitrage pricing theory for pricing the Istanbul stock market and for identifying the set of macroeconomic variables that corresponds most closely with the stock market factors. A list of macroeconomic variables was used to price the stocks of the Istanbul Stock Exchange, formed into eleven portfolios from the industrial sector, because it represents the most important segment of traded stocks, consisting of 174 out of 259 traded stocks in total. A higher index is reflected in higher values of these three variables and therefore indicates greater pressure on the exchange market, depending on the nature of the involvement of the respective central bank; that is, speculative pressures are either accommodated by a loss of reserves or can be prevented by the monetary authorities through an increase in interest rates. Each portfolio, and each industry, may be influenced by macroeconomic variables in a different manner; a macroeconomic factor may affect one industry positively but another industry negatively. The regression results indicate that there is no significant pricing relation between stock returns and the tested macroeconomic variables. This implies either that other macroeconomic factors affect stock returns on the Istanbul Stock Exchange, or that the multifactor arbitrage pricing theory with macroeconomic variables fails to explain the effect in this stock market. The conclusion is that there is no relationship between the macroeconomic variables and stock market returns.

Kandir (2008) noted that, by making use of statistical tools like factor analysis, arbitrage pricing theory provides guidance on the use of variables without requiring their pre-specification, but it did not take long for criticism to appear. One important criticism was that the arbitrage pricing theory cannot properly explain the variables used in a study, but merely derives them statistically. The implication of this insufficiency of the arbitrage pricing theory is that the factors obtained from factor analysis should correspond to primary economic variables, such as gross national product (GNP) or interest rates.
Additionally, in this paper they acknowledge that stock prices and stock returns are systematically affected by economic variables. For this purpose, data from July 1997 to June 2005 were selected in order to analyze their impact. Their findings suggested a significant relation between macroeconomic factors and stock returns in the countries examined. Kandir (2008) recommended the examination of major economic factors as alternatives to the derived factors in the arbitrage pricing theory, and was among the first to employ specific macroeconomic factors as proxies for the undefined variables in the arbitrage pricing theory. A company's expected dividends can be directly affected by increases in the inflation rate, real production, oil prices and consumption. The new model has an explicit advantage over the arbitrage pricing theory, although there is no theoretical framework for the selection of macroeconomic variables. Stock prices are found to share positive long-run relationships with industrial production and the consumer price index, whereas the results show a negative relation with the money supply, the interest rate and the exchange rate.

Kazi (2009) identified the significant risk factors for the Australian stock market by applying a cointegration technique to previously used variables, which act as proxies for Australian systematic risk factors. The linear combination of these variables is found to be cointegrated, although not all variables are significant. The bank interest rate, corporate profitability, the dividend yield, industrial production and, to a lesser extent, global market influence are significant for Australian stock market returns in the long run, while in each quarter stock prices are driven by their own market performance, the interest rate and global stock market movements of the previous quarter. The sensible implication for both local and overseas investors is that all investors are now able to manage their investment risks better when considering Australian stocks for their portfolios, by monitoring only the 4 to 5 factors identified here. The relationship between KIBOR rates and the stock prices of the oil sector, from the viewpoint of asset portfolio allocation, is commonly negative: an increase in interest rates raises the required rate of return, which in turn inversely affects the value of the asset. Measured as an opportunity cost, the nominal interest rate affects investors' decisions on stock holdings, and a rise in the opportunity cost may encourage investors to substitute other assets for shares. Using the cointegration technique, this paper performs an empirical analysis to identify the significant risk factors for the Australian stock market; in doing so it examines whether or not the selected variables can explain the return-generating and pricing process of the Australian stock market. The results are in conformity with current finance theory, yet interestingly different on some points. In the long run, it is found that Australian stock market prices are influenced by only 4 or 5 systematic risk factors.
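The cointegration approach described here can be illustrated with a small, hypothetical example: an Engle-Granger test for a long-run relationship between a (log) stock index and a macroeconomic series. The series below are simulated placeholders, not any study's actual data:

    # Hypothetical sketch: Engle-Granger cointegration test between a log stock index
    # and a macro series (e.g. an interest rate), in the spirit of the studies above.
    import numpy as np
    from statsmodels.tsa.stattools import coint

    rng = np.random.default_rng(1)
    common_trend = np.cumsum(rng.normal(size=200))          # shared stochastic trend
    log_index = common_trend + rng.normal(scale=0.5, size=200)
    macro_series = 0.8 * common_trend + rng.normal(scale=0.5, size=200)

    t_stat, p_value, crit_values = coint(log_index, macro_series)
    print(f"Engle-Granger t-statistic: {t_stat:.2f}, p-value: {p_value:.3f}")
    # A small p-value suggests the two series share a long-run (cointegrating) relation,
    # i.e. some linear combination of them is stationary even though each series drifts.

A rejection of the no-cointegration null is what statements such as "the linear combination of these variables is found to be cointegrated" refer to.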
Nguyen (2010) examined the stock price performance of an emerging stock market, the Stock Exchange of Thailand, by applying a new equilibrium stock price theory, choosing data covering financial crises for the assessment. The theory holds that stock market risks and returns are determined by fundamentals under a linear relationship, derived from a consistent multi-factor return-generating process and the assumptions of perfectly competitive and frictionless markets. The literature on asset pricing models has taken on a new lease of life since the appearance of the Arbitrage Pricing Theory, an alternative to the renowned Capital Asset Pricing Model (CAPM). Being interesting in its own right, the arbitrage pricing theory soon attracted a number of leading financial economists and researchers and influenced several strands of research. The methodology for testing the strength of the capital asset pricing model can also be applied to testing the arbitrage pricing theory, and two-pass test procedures are applied in almost every test of the arbitrage pricing theory. The study focuses on certain macroeconomic forces that systematically affect the returns of certain stocks. The results point to industrial production, changes in a default risk premium, the term structure, and unanticipated inflation

The Bell Jar Essay -- essays research papers

The Bell Jar The book opens in New York, with the main character pondering the execution of the Rosenbergs. Esther, the main character, is in New York because of a contest held by a fashion magazine. While in New York, Esther tells about her life through the encounters she has had. She is a college student in the honors courses. The whole trip to New York upset Esther's way of thinking: for example, before she went to New York she had planned to finish college and become a poet or English professor, but now she had no idea.

When Esther returned home she became very depressed. She wanted to put the whole New York experience behind her by taking an exclusive summer writing course. Only the best of the best writers were accepted into this class, and Esther was sure she had made it until her mother told her she was not accepted. This was what pushed Esther over the edge. She became more and more obsessed with how she would kill herself and planned it out carefully. When the time came she just couldn't do it, so she began to preoccupy herself by thinking of other ways of dying. She couldn't sleep or read, and this bothered her because she loved to read. Finally she went to see a doctor who gave her shock treatments. This made Esther even worse, and so she slipped even deeper into her depressed state. She knew the bell jar was almost completely upon her and there was nothing she could do to prevent...

Monday, August 19, 2019

Smut, Erotic Reality/obscene Ideology Essay -- Murray Davis Human Sexu

Smut, Erotic Reality/Obscene Ideology In the book Smut: Erotic Reality/Obscene Ideology, by Murray Davis (1983), the author expresses the idea that the best source for studying human sexuality objectively is "soft-core", rather than "hard-core", pornography (Davis, p. xix). The purpose of this paper is to critique Davis's claim and to consider what understanding of human sexuality someone might have if they used another resource that is available today, in this case the Internet. Davis argues that "hard core pornography is usually more abstract and less explicit than soft-core pornography" (Davis, 1983, p. xix). Davis doesn't go on to explain how hard-core pornography can be less explicit than soft-core. However, he does explain that hard-core pornography is more abstract in that it depicts the sex act only, and not the emotional or personal characteristics of the people involved in the act (Davis, p. xx). He believes soft-core pornography describes "a sexual experience", which conveys characteristics of the participants that are not described by hard-core pornography, whereas hard-core pornography describes "sexual behaviour", which involves more of the act of sex rather than the characteristics and feelings involved with sex (Davis, p. xix). Although Davis admits that the vocabulary of sex is changing (Davis, p. xxv), he also states that hard-core pornography uses considerably more vulgar terms that are associated with lower-class activity, such as "prick, fuck, and suck" (Davis, p. xxiii). Davis believes that hard-core pornography induces imaginative behaviours by using these lower-class, four-letter words. The stories use phrases such as "First we sucked, then we fucked." (Davis, 1983, p. xix) to give the reader the tools to imagine the scene actually taking place. The reader is led by the author through the story by means of words that may be better understood or more common in the reader's everyday life. He also accuses hard-core films of being "behavioristic" and "abstract" because they often fail to "fully inform the audience about the characters' personality types and social categories" (Davis, 1983, p. xx). Soft-core pornography, on the other hand, often depicts "the subtle phenomenological effects that result when a character's sexual behaviour clashes with his or her personal and social characteristics" (Davis, p. x... ... to the search. For example, love plays a role in our sexuality. If someone did not know this, they would enter "human sexuality" into the search engine and again might be distracted by flashy, hard-core sex sites and might not find anything on love. The overall understanding of human sexuality would be limited according to which sites were looked at. Although I agree somewhat with Davis's claim that soft-core, rather than hard-core, pornography may be a better resource for studying human sexuality, I feel that using only one resource for information can limit, and even sometimes distort, an individual's ideas of human sexuality. When using a resource such as the Internet, one may be overwhelmed with information, and marketing tactics may win the attention of information seekers and draw them away from the sites that actually offer factual information regarding human sexuality. Therefore, I believe that an individual should use all resources available to them when studying any topic, especially a topic as complex as human sexuality. References Davis, Murray S. (1983). Smut: Erotic Reality/Obscene Ideology. Chicago: University of Chicago Press.