Wednesday, July 31, 2019

Article Review Draft

I am reviewing literature that relates to my research topic of how information technology affects the employment rate in the logistics field. I have assembled 16 individual books, articles, and other sources that will support me in my research of my hypothesis. My goal in this review is to properly order and summarize the data I have accumulated, and to determine areas in which further research and focus are required (Creswell, 2014).

The first article that is going to be absolutely critical to my research is the Bureau of Labor Statistics (2015) Occupational Employment Statistics estimates. This is not an article but more specifically a database of information collected by U.S. government organizations in order to calculate nationwide employment rates. The data is collected on a monthly basis by the U.S. Census Bureau from a sampling of sixty thousand households. The employment database goes back to 1942; however, for purposes of my study I will use only recent information from the last 10-15 years. While this data is subjective, it offers a strong set of supporting historical employment trends in logistics career fields and is the current standard in U.S. employment information (Bureau of Labor Statistics, 2015).

In a research article by (Anderson, German, & Scrum, 1997), the authors look to provide empirical research into the impacts of downsizing, or reducing the number of employees, on logistics performance. Two main conclusions are reached: companies that have reduced their logistics workforce have a perception of improved logistics performance, but in reality they have the same indicators as firms that have not reduced their workforce. The other conclusion is that the reduction of workforce contributes greatly to a lack of loyalty, decreased morale, and vastly increased stress levels. This article provides an interesting argument that, with firms striving to reduce employee totals, reduction equally increases negative aspects as well.

The article (Jackson, 2001) clarifies the beginning of the Internet age and details what it is and when it came into existence as we know it today. The development of the Internet was a collaborative team research effort created in U.S. governmental agencies around 1960. The Internet was not a viable option for civilian society until 1990. This article will allow me to accurately focus on the proper time frame of Internet availability to the logistics community. I will not rely solely on the information in this article and will back up its findings with the data in similar Internet-origin articles. Additionally, this article makes clear that many supporting technologies and infrastructure developments positively contributed to the advent of the Internet.

In the article by (Laser, 2004), he explores the ways and methods by which the Internet, computers, and software with communications affect logistics and specifically transportation. This study confirms the vast importance of technology in revolutionizing modern logistics areas. One important aspect of this particular article is the point that, no matter the technological innovation, location still drives transportation speed, timeliness, and efficacy. I can utilize this important realization with other aspects of research and technology application and ensure my internal bias is reduced. The key takeaway I came away with is that no matter how much information or data is improved, the location and distance of transporting goods and materials will always persist.
The article One on One by (Roberts, 2004) is an interview with the vice president of the company Oracle, Greg Tennyson. This interview captures the strategies and leveraging of technologies in order to increase the profitability of logistics operations within the realm of shipping raw materials globally. Specifically useful to my research is the discussion of "offshoring" logistics operations. "Offshoring" is described as transitioning logistics operations from the U.S. to overseas markets, which offer vastly cheaper labor and fewer trade restrictions. India is a primary market utilized by Oracle in this article. I believe that the perspective of cheaper overseas labor cannot be mistaken for the advances in informational technologies. This article will further reduce my bias by tempering it with different empirical data supporting a theory that labor reductions in logistics may be due to cheaper labor in different regions of the globe. Despite the increase of offshoring, the article still reiterates the central importance of communications technologies in order to synchronize complex and intricate global transactions.

The article from (Atkinson, 1999) discusses the usage of communications and web-based technologies in order to develop logistics cost savings. The specific technology discussed in this article is Collaborative Planning, Forecasting and Replenishment (CPFR). CPFR is the Internet-based communications business solution utilized by many Fortune 500 companies today, including Wal-Mart. The article details the successes of this technology from a cost-savings perspective. The key point of this study in relation to my research is the importance of communications, not only external to a logistics company but internal as well. Communication enables timely delivery of information and ensures needs are forecasted and met. Technology and automation coupled with advanced communications technology are integral to the future of logistics.

In another article from (Atkinson, 1999), the author discusses the expansion of the Internet in the logistics sector and the development and maturation of e-commerce. E-commerce is commerce that takes place between two or more organizations electronically. Early in the inception of the Internet, many logistics companies were reticent to employ Internet-based technologies for fear of security issues. Those companies that mastered web-based technologies were rewarded by reductions in costs and labor efficiencies. The main learning point of this article is the fact that it correlates using technology and the Internet with reducing labor forces and being more productive with fewer people employed.

With the article by (McGovern, 1998), it is undeniable that the Internet is a crucial area for growth in logistics sectors. The only argument about the Internet as it pertains to the logistics industry is how to apply and utilize it in the best manner possible. The main issue with this article in regards to my research is the fact that it pertains mainly to visibility and communication and does not include hard data with employment statistics. While I can use the perspective and insight provided, additional support with numerical data will be more relevant to use in my study. I can always use and improve upon the information provided in this study.
The article is quite old (1998) in reference to the subject of my research. I will be able to use all information here as either pre- or post-Internet, and in that way even older articles can provide much-needed context.

The final article, by (Williams, 2001), provides additional support on the origins of the Internet. This article analyzes three components of logistics: inventory control, order processing, and transportation. In the article, the Internet impacts all three areas in positive ways. One area in which this article could be of more assistance to my research is, again, more usage of empirical data and numerical figures on employment and how that relates to performance.

In conclusion, the articles summarized together present a consistent gap in research in regards to how informational technologies impact the employment of logistics employees. I believe I can utilize the data surveys from the Bureau of Labor Statistics and incorporate supporting documentation to identify that informational technologies did, and continue to, reduce employment opportunities within the United States of America. I can continue to refine my literature or more clearly define the scope of my research question in order to synchronize with the availability of research at this time.

Tuesday, July 30, 2019

3rd World Short Story Analysis

Author Summary

Anoma is a university graduate who has hopes of becoming a teacher. However, her parents, especially her mother, had other ideas and wanted to give her in marriage. Mrs. Wickramasinghe's cousin finds a suitable match for Anoma. He is Fredrick Dias, a barrister who has just come back from England. He is said to be good looking, educated, rich, and from a good family background. Fredrick, also known as Wimal, visits Anoma, along with his aunt. After some traditions of welcoming the intended groom, Anoma and Wimal strike up a conversation. Later she agrees that she likes Wimal and they are soon engaged to each other. Months pass happily between the engagement and wedding. The wedding is a gala affair and soon afterwards, Anoma leaves for her honeymoon with Wimal. They spend the first night in the guest house in Kaduwela and thereafter proceed to the Grand Hotel, Nuwara Eliya. They go for walks and drives and Anoma enjoys herself except for two factors: two phone calls to Wimal from an unknown stranger and the fact that Wimal makes no move to make love to her. Upon confrontation, he informs her that the caller is a good friend who is not a girl and excuses himself saying they have a lifetime ahead of them to make love. They come back to Colombo and live in an old Walawuwa where Anoma enjoys numerous comforts. Wimal is kind to her but still is distant from her. Anoma's parents visit her and are delighted about their daughter's new lifestyle. Anoma does not confide in her mother but speaks to a friend about her worries. When she does, she finds out that her husband is a homosexual.

Analysis

Plot

There is only one plot line for the story: an arranged marriage of a girl to a man who turns out to be a homosexual. The story is written in chronological order with plot devices. There is a flashback at the end of the story when Anoma's friend narrates to her what she overheard about Wimal. There are also some instances of foreshadowing: a stranger calling Wimal twice while he is on his honeymoon, Wimal's words that they will have time later on to make love, and the fact that he is an educated, good looking, rich man who is single.

Standard Pyramidal Plot Pattern

Exposition - introduction of the characters, setting and main conflict:
* Anoma Wickramasinghe - University graduate with an upper second class degree, has a career as a teacher, and is a Buddhist.
* Mrs. Wickramasinghe - Traditional mother who believed it was best for her daughter to be married and stabilized.
* Mr. Wickramasinghe - A typical Sri Lankan father who remains passive while the mother sorts out the issue of marriage for their daughter.
* Fredrick Dias - Also referred to as Wimal, a barrister from England, orphan, Christian; did not believe in love but wanted security.
* Mrs. Dias - Traditional aunt, took good care of Wimal, made Wimal acquiesce to her wishes.

Main conflict - Anoma experiences an internal conflict. She is curious about many things, like who the stranger who calls on her husband during their honeymoon is, and why her husband does not want to make love to her. She finally discovers that her husband is a homosexual.

Rising Action - develops the conflict and creates suspense.
Develops the conflict - Anoma continues to feel ill-used because of the person who keeps calling her husband and Wimal's reluctance to make love to her.
Creates suspense - Anoma is suspicious about the calls Wimal receives.
Climax - the turning point of the story, where the main character comes face to face with an issue. It occurs at the very end of the story and therefore is also the resolution/denouement: Anoma confides in her friend and finds out that her husband is a homosexual.
Mini-climax - when Anoma questions Wimal whether they are going to have sex and he replies saying they have a lifetime ahead of them.

Setting

Time -
Place - Anoma and Wimal choose Nuwara Eliya as their honeymoon destination. This is a very common and cold location. This acts as a symbol as well, defining Wimal's character: Wimal is distant from Anoma even during their honeymoon. Even with shivering temperatures, Wimal refuses to cuddle Anoma and keep her warm.
Social Environment - It is traditional because Mrs. Wickramasinghe wanted her daughter married to a person of the same caste regardless of his qualities. It is restrictive because Wimal is distant from Anoma even during their honeymoon. The story is set in a reserved setting because everything is rigid and formal.
Weather - There is not much of significance about the weather except for Anoma's and Wimal's honeymoon destination. The cold climate in Nuwara Eliya reflects the distance and the lack of intimacy between the newlyweds.

Character

The protagonist of the story is Anoma Wickramasinghe. She can be considered reserved because she did not engage in an affair while she was at university. She is also shy upon meeting her intended husband but loses her shyness soon when they start talking to each other. She is simple and does not like much of a hassle. This is evident in the relief she feels when she leaves the wedding and sets off on the honeymoon with Wimal. She is also a patient person because she is willing to get married in an arranged fashion and waits for the love to grow. Anoma is also an obedient wife because she accompanies her husband.

Civil Disobedience Essay

In Ralph Waldo Emerson's essay "Self-Reliance" and Henry David Thoreau's essay "Civil Disobedience," both transcendentalist thinkers speak about being individual and about what reforms and changes need to be made in a conformist society. Thoreau elaborates more on the relationship between individuality and society and on breaking free from conformity, meaning to take a stance and influence man to make a social change. Emerson leans more towards nature and the connection to spirituality. He exclaims that for individuality there has to be some sort of understanding of oneself to make an impact, which is found in basic nature. He believes that man's connection to nature is the most valuable source of life because nature is what links man to God, "the divine providence." Both authors express the need for individuality in order to possess strong morals and become whole through their transcendentalist ideals.

In Emerson's "Self-Reliance," social responsibility is important. The meaning behind this is that there is a time in man's life when he will finally realize that he has a purpose, a destiny, and the responsibility to achieve goals as long as there is a tap into spiritual nature. Emerson states, "The strongest man in the world is he who stands alone," which references the belief of individualism. Emerson notes that famous men and women are often misunderstood simply because of their opinions, ideas, and thoughts; however, this misunderstanding is why they are so respected. One large point in "Self-Reliance" is that humans should not conform to society but be independent in mind. Emerson stresses that one should connect with nature to maintain peace of mind and an individual mentality.

In "Civil Disobedience," Thoreau meets a man while serving time in prison who has been locked away for burning down a barn. Even so, Thoreau sees his cellmate as an honest man by simply trusting his own intuition. Furthermore, Thoreau writes, "That government is best which governs least," in lines 2-3, which is based on the belief that people should not conform to society but stay independent and embrace their own beliefs, goals, values, and morals.

Both "Self-Reliance" and "Civil Disobedience" are relevant in modern society because they discourage conformity, which is a big problem in the world today. Humans tend to lean with the majority, but should be taught to stand their ground. Both essays also mention the government and the problems involved in it. Since they were written, government has not improved; it may have even worsened. Government is best when it governs least; that perspective should still be applied to today's government. Now, the government tries to constantly control every aspect of everyone's life, but as Thoreau states, it should allow its people to decide major issues.

Monday, July 29, 2019

Human Population Growth and the Environment Essay

Since the first appearance of humans on earth, population increase has always been attributed to a number of factors, namely: fertility, longevity, infant mortality, animal domestication plus agriculture, the industrial revolution, nutrition, and medicine. In the past, human population on the surface of the earth was scarce. According to Hunter (2000), the above factors contributed greatly to the increase in population over the past 200 years. Human population was constantly kept small due to the occurrence of diseases that were incurable, natural disasters that could not be avoided, a high infant mortality rate, poor fertility rates, poor agricultural farming, poor nutrition, and poor knowledge of medicine.

With human beings becoming more revolutionized and discovering tools which could help them in increasing their production, a lot of changes made in the field of agriculture contributed to an increase in food production and hence longevity. The majority of groups who were pastoralists, on seeing the benefits of agriculture, settled in various places; coupled with the technological know-how of the given art, specialization led to the growth of various fields, which encouraged a given population to be concentrated in a town center in order to benefit from the given activity. Trade came hand in hand with agriculture, and humans diversified in their search to get more stability in their lives. With stability came the need to research the common problems that faced the given population and provide remedies. These are some of the things that led to research in technological and medical advancements, which resulted in increased cure rates and lower death rates at all ages.

Around 200 years ago, with the coming of the industrial revolution, the human population started to grow as people discovered that the world offered more resources. Around that time, infant mortality had been reduced as research in medicine had advanced. In addition, mortality in general was reduced as more diseases were cured. As human

Sunday, July 28, 2019

The Influence of the Columbian Exchange throughout North America Research Paper

Europeans started this contact and habitually decided its terms. For Africans and Native Americans, their life in America was to be steeped in tragedy for the next three centuries.

Disease

The most devastating consequences of the lengthy isolation of the American continent were to be biological. Africans, Asians and Europeans had been exposed to each other's maladies for centuries because they frequently came into contact with each other through trade practices (Peabody and Grinberg, 2007). By 1500, these three ethnic groups had acquired immune systems that moderately protected them from most illnesses. Native Americans, who remained unexposed to other ethnic groups, were larger in body size and healthier than Columbus and his companions in 1492, when the two groups first met (McNeill, 2012). However, their long isolation from other ethnicities meant that their bodies had no immunity against the diseases that the other ethnic groups had already mastered. European and African maladies would soon begin ravaging the American Indian tribes. Smallpox quickly became the largest killer, even though influenza and measles also decimated huge populations of American Indians. The native population of Mexico, for example, was approximately 17 million when Cortes and his men reached the land in 1519 (McNeill, 2012). A century later, there were fewer than a million natives remaining in Mexico, simply as a result of communicable diseases. It has been estimated that the entire Native American population was reduced by 90 percent within the first 100 years after European travelers first reached the American shores. This fact hugely contributed to the subsequent European domination of the continent.

The Introduction of Food Crops

One of the few advantages of the Columbian Exchange era was the exchange of knowledge of different crops between different ethnic groups. Prior to the Columbian Exchange, there had been no potatoes cultivated in Europe (Hughes, 2003). The Columbian Exchange also hugely expanded the scale of production of some well-liked drugs as well as crops, such as sugar, coffee, and tobacco, used by many Europeans (Crosby, 2008). In the next few centuries, potatoes would grow to be a major ingredient of Russian vodka and the staple food in Ireland (Hughes, 2003). Chocolate, a plant only previously grown in the Americas, soon became a favorite in Europe. Plants like peanuts and maize were also transported to Africa by the Portuguese. These robust crops could be cultivated in arid regions that hardly sustained any other type of edible plant (Hall, 2003). There are many historians who believe that the introduction of maize in Africa resulted in an increase in population. Before Columbus reached the American shores in 1492, the Americas had many different domesticated crops such as cassava, maize (corn), squashes, potatoes, and different types of beans. Other plants that were less actively cultivated included papaya, sweet potato, avocado, pineapple, tomato, guava, chili peppers, peanuts, and cacao (McNeill, 2012). In spite of maize's success in readapting to the African climate, the potato did not do as well in Africa. The potato would have a stronger impact in developing the Eurasian populations (Bond,

Saturday, July 27, 2019

A critical examination of the Korean Dietary regimen Essay

Due to the various disasters and floods in Korea, the dietary regimen in the country has been affected and is becoming worse. The Korean government gave more importance to guns for fighting than food for people, which has changed the dietary practices of the people living there. The Korean people eat a lot of vegetables and meat, which in turn gives them protein, which is very essential and often suggested by doctors for healthy living.

The Korean people make various types of grain alcohol, most commonly and particularly known as Soju. Korean females are traditionally not allowed to imbibe alcohol; on the other hand, alcoholism is not unknown and is most common amongst Korean men. A great proportion of Korean males suffer from various types of kidney or liver troubles and stomach cancer as a result of too much consumption of intoxicating drinks or alcohol.

The Korean diet puts forward a healthy assemblage of foodstuffs, which is low in animal fat as well as high in fiber; however, a high level of sodium is found in Korean dishes, which use soy sauce, hot sauce, bean paste, and fish sauce, and too much sodium consumption is considered to be the root of various physical conditions. Research on sodium usage indicates that using excess salt can increase the probability of elevated blood pressure, stroke, heart failure, kidney disease, diabetes, fatty liver, fragile bones, asthma, premature death, and stomach cancer. Consuming too much sugary food, which has a high rate of sugar, contributes to diabetes. Koreans have started using salt to preserve food, and people with a high blood pressure problem or diabetes are over and over again told to decrease the sodium in their food. Different consumption patterns signal dissimilar dietary mores throughout the world.

Friday, July 26, 2019

The Numbing of the American Mind Assignment

In response to the psychological tuning, society describes each event in relation to the presentation offered by the media industry. The success of the media in shaping society is evident in the way it shapes people's behavior. The shaping of new buildings and towns has led to the erasure of history. Currently, in order to access the traditional architecture of a city or town, one must turn to the remaining areas which are considered forgotten, and which play a pivotal role in preserving that traditional architecture.

The crucial question

The main problem with the whole situation is that humans are tuned to reason or behave in a way that does not encourage thinking outside the box. Consequently, humanity tends to behave the way the media has tuned them, and society describes each event in relation to the presentation offered by the media industries, raising the questions of why buildings change and whether traditional buildings can be conserved. The approach has ensured that events that occurred in the past cannot be revisited easily because of the continual erasure of the features that may bring the issue to remembrance. In fact, issues such as having animals in captivity have eliminated the need for nature. Wolves and dolphins in captivity will behave differently compared to those in the wild (Zengotita 34). The wild adventure is not thrilling anymore as a result of the ever-increasing access to caged animals. Natural things have become limited and precious, with some being considered icons. In the past, camping out involved a close encounter with nature; this has changed exceedingly, as current societal changes have made it impossible for such events to be undertaken. The leading force of discouragement is the media, which has taught that challenges and tragic encounters are the only probable outcome, creating fear within society and thus changing preference from nature to visiting animal orphanages or caged animals. In addition, even the advertisements of products and services are exaggerated because they lack realism. For instance, the advertisement of SUVs includes depicting abilities that the car cannot attain, such as crossing rivers and extremely difficult terrain.

Thursday, July 25, 2019

Emotional Development Essay

A multi-agency team, consisting of a play worker, a nursery nurse, and a teaching assistant, worked with Rose to help her come out of her emotional trauma. The fun tools available to the kids and the types of play activities were observed, along with special consideration given to the team's supervision and other adults' direct or indirect contribution to play activities. The team saw positive outcomes.

2. Importance of Attachment and Its Effects on the Brain

The importance of the attachment of infants and toddlers to their parents or caregivers cannot be denied in terms of healthy mental development. "The impact of attachment disruptions on children's lives can be devastating and far-reaching", state Kaduson and Schaefer (2006: 148). When the child enjoys vigorous attachment with the caregiver, he will learn how to offer and maintain a devoted and compassionate relationship that benefits the child in both the short and long run. He will learn to rely on others. The long-term outcome will be a contented, independent, and confident personality. On the other hand, when the child does not have an attachment bond with the caregiver, he will learn to fear, to feel guilty, and to feel that the world is a place which is not safe and where his needs are not going to be met. The long-term outcome is a timid personality suffering from an inferiority complex. Hence, we see that the attachment between the infant/toddler and the caregiver has significant importance in terms of personality development outcomes.

3. Components of Healthy Attachment

According to Kaduson and Schaefer (2006: 267), "a healthy attachment allows for a balance between the toddler's developmentally appropriate exploratory drive and need for emotional reassurance and support." A healthy attachment between an infant and the caregiver has two components:
* the infant's needs that he wants the caregiver to fulfill
* the timely response of the caregiver
The attachment occurs when the infant has a sure feeling that the parent or the caregiver will always be there to fulfill his needs of hunger, thirst, clothing, cleaning, and the like. When the caregiver gives a timely response to the infant, this gives rise to trust. The infant forms trust in the caregivers when his needs are fulfilled and forms an attachment with them. When the infant feels otherwise, he learns to mistrust others. Attachment also includes efforts by which the infant tries to remain in physical contact with the caregiver, for example, holding a finger tight, clinging to the bosom, and sticking to the lap.

4. Parenting Styles and Attachment

Parents and caregivers can play a very important role in developing and maintaining healthy bonds of attachment with children (Maccoby 1992). When caregivers give children the chance to share their problems and express their needs to them, they are actually assuring them that they can always depend upon them for a solution. Parents use many approaches while parenting, the most common of which are the authoritarian, authoritative, permissive, and uninvolved styles. Attachment occurs when the child feels secure with his parent's parenting style (Strage & Brandt 1999). Authoritarian parents leave no room for reasoning and communication. Authoritative parents leave room for objections. They listen to their child's ideas and reason with them, due to which the child

Production Possibilities Curve Essay

In coming up with the production possibility curve, a number of assumptions have to be made. The model assumes that only two goods are being combined. In addition, the curve also implies that the two goods can be interchanged. Interchanging the two products does not affect the production of the products and services desired. Furthermore, an assumption is also made that the factors of production do not vary. Similarly, an assumption is made that the period is limited, there are no technological changes, and all resources are utilized (Tucker, 2011).

Most countries strive to manufacture goods and services according to the production possibility curve. It is impossible for countries to produce goods and services beyond the production possibility curve. In addition, countries that produce inside the curve are said to be inefficient. Precisely, it can be claimed that such a country is not utilizing its resources adequately.

The curve can shift outwards due to changes in a number of factors. Some of these factors may include advancement in technology and the innovation of new methods of production (Russell, 2013). Furthermore, an outward shift can be brought about by an improved Gross Domestic Product and economy in general. However, an inward shift of the curve is different. It is possible for the curve to shift inwards due to a lack of sufficient factors of production. For instance, some countries entirely depend on oil as their primary factor of production. If such factors of production are depleted, the production possibility curve will shift inwards. In addition, natural calamities can influence the curve to shift inwards (Tucker, 2011). Natural disasters lead to loss of lives; consequently, the nation's labor force is significantly reduced. Hence, a nation's factors of production are significantly reduced, forcing the curve to shift inwards. In addition, natural calamities prevent industry from operating. Most of the available

Wednesday, July 24, 2019

Marketing Essay

Along with glamour and excitement, event planning requires hard work and dynamism. This type of marketing requires creativity, professionalism, and distinction, and differs to some extent from the marketing of conventional products. XYZ Company, which is analyzed in this essay, has been asked to handle the launch of a new brand of PDAs later this year. This essay focuses on highlighting the marketing plan for the events and evaluates and analyzes the marketing mix for the launch.

Traditionally, event marketing was associated with sponsoring a sports event or an industry conference. The concept of event marketing has undergone change in recent years. Event marketing is known by other names, like experiential marketing, brand experience marketing, or live marketing. The purposes of event marketing, also described in this essay, may differ across products or industries, but the ultimate goal is to add value to any live event. The importance of event marketing is analyzed. The right event can open the prospects' minds to the marketing message. For a well-known brand, event marketing becomes easy, as any kind of advertisement is appropriate. Promoting brands for the youth through competitions or concerts is a popular approach to brand marketing. These help to increase the sales and popularity of the manufacturers. The objective of event marketing is that consumers must have a positive approach to the product and the brand. The launch of the product was also designed in the essay to raise the profile of the XYZ Company.

Tuesday, July 23, 2019

American Presidents Essay

There is no doubt that the contrast between the two is stark, and that Bill Clinton was a far superior leader and far more worthy of the respect, admiration and gratitude of Americans.

The reputation of the United States is the first area in which the difference between Bush and Clinton is stark. Under Clinton, the United States respected multilateral agreements, sought consensus among the international community on matters of great import, projected the power of the United States in a non-arrogant manner, and respected human rights. For example, Clinton pursued and successfully achieved treaties that grew and strengthened international trade, such as the North American Free Trade Agreement (NAFTA) and the General Agreement on Tariffs and Trade (GATT). He also helped negotiate the Kyoto Protocol against global warming. In addition, he utilized U.S. military power when necessary and within the context of NATO, as was the case in Kosovo. Because of his active solicitation of and respect for the opinions and influence of other nations, the United States enjoyed a high degree of respect and admiration throughout the world.

On the contrary, Bush has led ... backpedaled on security assurances that had been made to North Korea, effectively provoking that country to resume nuclear weapons development and causing them to return to caustic anti-American propaganda and posturing. In addition, he pulled out of the Kyoto accord on global warming, effectively leaving much of the rest of the world high and dry when it comes to efforts to fight against the growing environmental calamity. Most importantly, he thumbed his nose at the world when deciding to unilaterally invade Iraq on a false pretext, and then arrogantly recast the Iraq war as the front line in the "war on terror" when it became apparent that his WMD pretext for the invasion was bogus. In short, the Bush presidency has personified the "ugly American" stereotype of the loose cannon cowboy blindly shooting first and asking questions later. As such, America's reputation in the world has never been lower.

A second issue that illustrates a wide gulf between the administrations of George W. Bush and Bill Clinton is that of the economy and the federal budget. Under Clinton, America's economy sustained the longest and strongest economic expansion in history, adding jobs at an unprecedented clip, growing people's investment portfolios astronomically, and prompting an improvement in the quality of life of Americans at all levels of the socio-economic spectrum. Clinton got elected largely based on a groundswell of popular discontent with the state of the economy under his predecessor, George H.W. Bush. He did not disappoint, as few would argue that the economy did not grow at an amazing clip that benefited nearly all Americans. By contrast, George W. Bush has led America into a period of economic stagnation, essentially returning the country to the state it was in when Clinton

Monday, July 22, 2019

Assumptions and Fallacies Essay

• What are assumptions? How do you think assumptions might interfere with critical thinking? What might you do to avoid making assumptions in your thinking?

An assumption is an idea one believes to be true based on prior experience or one's belief systems (Elder & Paul, 2002). Assumptions are a part of our belief system, but we do not know whether they are true or not. Assumptions are a vital part of our critical thinking; if we used assumptions all the time, then we would not be able to think critically. No matter where we are, it is imperative to know all the facts prior to drawing any kind of conclusion, or it becomes an assumption. It may be difficult at times to utilize critical thinking, but keeping an open mind will help to prevent assumptions. There is nothing worse than making an assumption and then being confronted by someone who has all the facts; it can shatter your confidence. You can avoid this by researching all the facts and utilizing your critical thinking abilities to cover every corner and aspect of your idea or topic. This is the key to keeping from making assumptions.

• What are fallacies? How are fallacies used in written, oral, and visual arguments? What might you do to avoid fallacies in your thinking?

Fallacies are deceptive or misleading arguments that are untrue or unreliable. Fallacies are mainly used to help support a person's argument when they cannot find factual evidence to back up their statements. Fallacies can be used in many different ways. They are used on purpose in fictional writing and in magazines like People. Fallacies can be used orally by someone who is telling a firsthand story but is only versed in their own side, so the fallacy may come off as unintentional. I see fallacies being used mostly visually, in political ads and propaganda media campaigns. These get away with most of their fallacies because the amount of time it takes to research the truth is usually so long that the psychological damage is already done to the public. People tend to trust what others say unless they have found previous fallacies in their statements. I avoid believing fallacies by being conservative in my thoughts. If I see something that I might consider to be fallacious based on my past experiences, then I do the research to find out the facts. Fallacies and assumptions hold the same key: research will reveal them all.

Sunday, July 21, 2019

Performance Measure of PCA and DCT for Images

Generally, in image processing, transformation is the basic technique that we apply in order to study the characteristics of the image under scan. Under this process, here we present a method in which we analyze the performance of two methods, namely PCA and DCT. In this thesis we analyze the system by first training the set for a particular number of images and then analyzing the performance of the two methods by calculating the error in each. This thesis referred to and tested the PCA and DCT transformation techniques.

PCA is a technique which involves a procedure that mathematically transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for much of the variability in the data, and each succeeding component accounts for much of the remaining variability. Depending on the application field, it is also called the discrete Karhunen-Loève transform (KLT), the Hotelling transform, or proper orthogonal decomposition (POD). DCT expresses a series of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. Transformations are important to numerous applications in science and engineering, from lossy compression of audio and images (where small high-frequency components can be discarded) to spectral methods for the numerical solution of partial differential equations.

CHAPTER 1 INTRODUCTION

1.1 Introduction

Over the past few years, several face recognition systems have been proposed based on principal components analysis (PCA) [14, 8, 13, 15, 1, 10, 16, 6]. Although the details vary, these systems can all be described in terms of the same preprocessing and run-time steps. During preprocessing, they register a gallery of m training images to each other and unroll each image into a vector of n pixel values. Next, the mean image for the gallery is subtracted from each image, and the resulting centered images are placed in a gallery matrix M. Element [i, j] of M is the ith pixel from the jth image. A covariance matrix W = MM^T characterizes the distribution of the m images in R^n. A subset of the eigenvectors of W are used as the basis vectors for a subspace in which to compare gallery and novel probe images. When sorted by decreasing eigenvalue, the full set of unit-length eigenvectors represents an orthonormal basis, where the first direction corresponds to the direction of maximum variance in the images, the second to the next largest variance, etc. These basis vectors are the principal components of the gallery images. Once the eigenspace is computed, the centered gallery images are projected into this subspace. At run-time, recognition is accomplished by projecting a centered probe image into the subspace, and the nearest gallery image to the probe image is selected as its match. A minimal code sketch of these steps is given below.

There are many differences among the systems referenced. Some systems assume that the images are registered prior to face recognition [15, 10, 11, 16]; among the rest, a variety of techniques are used to identify facial features and register them to each other. Different systems may use different distance measures when matching probe images to the nearest gallery image.
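The following MATLAB sketch walks through the preprocessing and run-time steps just described. It is a minimal illustration under stated assumptions, not the exact implementation evaluated in this thesis: the gallery is assumed to be already registered and stored as the columns of a pixel matrix, the variable names (imgs, probe_img) are hypothetical, the small "snapshot" matrix M'*M is diagonalized instead of the much larger MM^T, and a reasonably recent MATLAB is assumed (vecnorm, implicit expansion).

```matlab
% Eigenface pipeline (a sketch). Assumes: imgs is an n-by-m double matrix
% whose columns are m unrolled, registered gallery images (n pixels each),
% and probe_img is one registered test image of the same size.
u = mean(imgs, 2);                    % mean gallery image
M = imgs - u;                         % centered gallery matrix (n-by-m)

% Eigenvectors of W = M*M' via the small m-by-m "snapshot" matrix M'*M.
[V, D]   = eig(M' * M);
[ev, ix] = sort(diag(D), 'descend');  % order by decreasing eigenvalue

% Keep the first k directions carrying, say, 90% of the total energy.
g = cumsum(ev) / sum(ev);
k = find(g >= 0.90, 1);
E = M * V(:, ix(1:k));                % map back to image space (n-by-k)
E = E ./ vecnorm(E);                  % unit-length eigenfaces

% Project the gallery and the probe, then match by Euclidean distance.
G = E' * M;                           % k-by-m gallery feature vectors
p = E' * (double(probe_img(:)) - u);  % k-by-1 probe feature vector
[~, best] = min(vecnorm(G - p));      % index of the nearest gallery image
```

The last line is the run-time matching step described above; swapping the Euclidean distance for another distance measure changes only that line.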
Different systems also select different numbers of eigenvectors (usually those corresponding to the largest k eigenvalues) in order to compress the data and to improve accuracy by eliminating eigenvectors corresponding to noise rather than meaningful variation. To help evaluate and compare individual steps of the face recognition process, Moon and Phillips created the FERET face database and performed initial comparisons of some common distance measures for otherwise identical systems [10, 11, 9]. This paper extends their work, presenting further comparisons of distance measures over the FERET database and examining alternative ways of selecting subsets of eigenvectors.

The Principal Component Analysis (PCA) is one of the most successful techniques that have been used in image recognition and compression. PCA is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is needed to describe the data economically. This is the case when there is a strong correlation between observed variables. The jobs which PCA can do are prediction, redundancy removal, feature extraction, data compression, etc. Because PCA is a classical technique which works in the linear domain, applications having linear models are suitable, such as signal processing, image processing, system and control theory, communications, etc.

Face recognition has many applicable areas. Moreover, it can be categorized into face identification, face classification, or sex determination. The most useful applications include crowd surveillance, video content indexing, personal identification (e.g., driver's license), mug shot matching, entrance security, etc. The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D facial image in terms of the compact principal components of the feature space. This can be called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of facial images (vectors). The details are described in the following section.

PCA computes the basis of a space which is represented by its training vectors. These basis vectors, actually eigenvectors, computed by PCA are in the direction of the largest variance of the training vectors. As has been said earlier, we call them eigenfaces. Each eigenface can be viewed as a feature. When a particular face is projected onto the face space, its vector in the face space describes the importance of each of those features in the face. The face is expressed in the face space by its eigenface coefficients (or weights). We can handle a large input vector, a facial image, only by taking its small weight vector in the face space. This means that we can reconstruct the original face with some error, since the dimensionality of the image space is much larger than that of the face space.

This thesis presents a face recognition system using the Principal Component Analysis (PCA) algorithm. Automatic face recognition systems try to find the identity of a given face image according to their memory. The memory of a face recognizer is generally simulated by a training set. In this project, our training set consists of the features extracted from known face images of different persons.
Thus, the task of the face recognizer is to find the most similar feature vector among the training set to the feature vector of a given test image. Here, we want to recognize the identity of a person where an image of that person (test image) is given to the system. We use PCA as the feature extraction algorithm in this project.

In the training phase, we extract feature vectors for each image in the training set. Let I_A be a training image of person A which has a pixel resolution of M × N (M rows, N columns). In order to extract the PCA features of I_A, we first convert the image into a pixel vector Φ_A by concatenating each of the M rows into a single vector. The length (or dimensionality) of Φ_A will be M × N. In this project, the PCA algorithm is used as a dimensionality reduction technique which transforms the vector Φ_A into a vector ω_A of dimensionality d, where d ≪ M × N. For each training image I_i, we calculate and store these feature vectors ω_i.

In the recognition phase (or testing phase), we are given a test image I_j of a known person. Let α_j be the identity (name) of this person. As in the training phase, we compute the feature vector of this person using PCA and obtain ω_j. In order to identify I_j, we compute the similarities between ω_j and all of the feature vectors ω_i in the training set. The similarity between feature vectors can be computed using the Euclidean distance. The identity of the most similar ω_i will be the output of our face recognizer. If i = j, it means that we have correctly identified the person j; otherwise, if i ≠ j, it means that we have misclassified the person j.

1.2 Thesis structure

This thesis work is divided into five chapters as follows.

Chapter 1: Introduction. This introductory chapter briefly explains the procedure of transformation in face recognition and its applications, explains the scope of this research, and gives the structure of the thesis for friendly usage.

Chapter 2: Basics of Transformation Techniques. This chapter gives an introduction to the transformation techniques. Here we introduce the two transformation techniques for which we perform the analysis, whose results are used for face recognition purposes.

Chapter 3: Discrete Cosine Transformation. This chapter continues from Chapter 2 on transformations. The second method, i.e., DCT, is introduced and analyzed.

Chapter 4: Implementation and results. This chapter presents the simulated results of the face recognition analysis using MATLAB. It gives the explanation for each and every step of the design of the face recognition analysis, and it gives the tested results of the transformation algorithms.

Chapter 5: Conclusion and Future work. This is the final chapter of this thesis. Here, we conclude our research, discuss the achieved results of this research work, and suggest future work for this research.

CHAPTER 2 BASICS OF IMAGE TRANSFORM TECHNIQUES

2.1 Introduction

Nowadays image processing has gained so much importance that in every field of science we apply image processing, for security purposes as well as to meet increasing demand. Here we apply two different transformation techniques in order to study their performance, which will be helpful for detection purposes.
The computation of the performance for the image given for testing is performed in two steps: PCA (Principal Component Analysis) and DCT (Discrete Cosine Transform).

2.2 Principal Component Analysis

PCA is a technique which involves a procedure which mathematically transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for much of the variability in the data, and each succeeding component accounts for much of the remaining variability. Depending on the application field, it is also called the discrete Karhunen-Loève transform (KLT), the Hotelling transform, or proper orthogonal decomposition (POD). Nowadays PCA is mostly used as a tool in exploratory data analysis and for making predictive models. PCA also involves the calculation of the eigenvalue decomposition of a data covariance matrix or the singular value decomposition of a data matrix, usually after mean-centering the data for each attribute. The results of this analysis technique are usually shown in terms of component scores and also as loadings.

PCA is a real, eigenvector-based multivariate analysis. Its action can be described as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate data set is visualized as a set of coordinates in a multi-dimensional data space, this algorithm supplies the user with a lower-dimensional picture, a shadow of the object viewed from its most informative viewpoint, which reveals the true informative nature of the object. PCA is very closely related to factor analysis; some statistical software packages deliberately conflate the two techniques. True factor analysis makes different assumptions about the underlying structure and then solves for the eigenvectors of a slightly different matrix.

2.2.1 PCA Implementation

PCA is mathematically defined as an orthogonal linear transformation technique that transforms data to a new coordinate system, such that the greatest variance from any projection of the data comes to lie on the first coordinate, the second greatest variance on the second coordinate, and so on. PCA is theoretically the optimum transform technique for given data in least-square terms. For a data matrix X^T, with zero empirical mean (i.e., the empirical mean of the distribution has been subtracted from the data set), where each row represents a different repetition of the experiment and each column gives the results from a particular probe, the PCA transformation is given by:

    Y^T = X^T W = V Σ^T

where the matrix Σ is an m-by-n diagonal matrix whose diagonal elements are non-negative, and W Σ V^T is the singular value decomposition of X.

Given a set of points in Euclidean space, the first principal component corresponds to the line that passes through the mean and minimizes the sum of squared errors with those points. The second principal component corresponds to the same concept after all correlation with the first principal component has been subtracted from the points. Each eigenvalue indicates the part of the variance that is correlated with each eigenvector. Thus, the sum of all the eigenvalues is equal to the sum of the squared distances of the points from their mean, divided by the number of dimensions. PCA rotates the set of points around its mean in order to align it with the first few principal components. This moves as much of the variance as possible into the first few dimensions.
The values in the remaining dimensions tend to be very highly correlated and may be dropped with minimal loss of information. PCA is used for dimensionality reduction. PCA is the optimal linear transformation technique for keeping the subspace which has the largest variance. This advantage comes at the price of greater computational requirements, for instance when compared with the discrete cosine transform. Non-linear dimensionality reduction techniques tend to be even more computationally demanding than PCA.

Mean subtraction is necessary in performing PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component will instead correspond to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data.

Assuming zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), the principal component w_1 of a data set x can be defined as:

    w_1 = arg max over ||w|| = 1 of E{ (w^T x)^2 }

With the first k − 1 components, the kth component can be found by subtracting the first k − 1 principal components from x:

    x̂_(k−1) = x − Σ_(i=1..k−1) w_i w_i^T x

and by substituting this as the new data set in which to find a principal component:

    w_k = arg max over ||w|| = 1 of E{ (w^T x̂_(k−1))^2 }

The transform is therefore equivalent to finding the singular value decomposition of the data matrix X, and then obtaining the reduced-space data matrix Y by projecting X down into the reduced space defined by only the first L singular vectors, W_L:

    Y = W_L^T X = Σ_L V_L^T

The matrix W of singular vectors of X is equivalently the matrix W of eigenvectors of the matrix of observed covariances C = X X^T (a small numerical check of this equivalence is sketched just after the assumptions listed in Section 2.2.2 below). The eigenvectors with the highest eigenvalues correspond to the dimensions that have the strongest correlation in the data set (see Rayleigh quotient).

PCA is equivalent to empirical orthogonal functions (EOF), a name which is used in meteorology. An auto-encoder neural network with a linear hidden layer is similar to PCA. Upon convergence, the weight vectors of the K neurons in the hidden layer will form a basis for the space spanned by the first K principal components. Unlike PCA, this technique will not necessarily produce orthogonal vectors. PCA is a popular primary technique in pattern recognition, but it is not optimized for class separability. An alternative is linear discriminant analysis, which does take this into account.

2.2.2 PCA Properties and Limitations

PCA is theoretically the optimal linear scheme, in terms of least mean square error, for compressing a set of high-dimensional vectors into a set of lower-dimensional vectors and then reconstructing the original set. It is a non-parametric analysis, and the answer is unique and independent of any hypothesis about the data's probability distribution. However, the latter two properties are regarded as weaknesses as well as strengths: being non-parametric, no prior knowledge can be incorporated, and PCA compressions often incur loss of information. The applicability of PCA is limited by the assumptions [5] made in its derivation. These assumptions are:
* The observed data set is assumed to be a linear combination of a certain basis. Non-linear methods such as kernel PCA have been developed without assuming linearity.
* PCA uses the eigenvectors of the covariance matrix, and it only finds the independent axes of the data under the Gaussian assumption. For non-Gaussian or multi-modal Gaussian data, PCA simply de-correlates the axes.
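As flagged in Section 2.2.1, the eigenvectors of C = X X^T span the same basis as the singular vectors of X. Before continuing with the limitations, here is a small MATLAB check of that equivalence; the data is synthetic and the variable names are only illustrative:

```matlab
% Covariance eigendecomposition vs. SVD of the data matrix (a sketch).
% Convention as in Section 2.2.1: columns of X are zero-mean samples.
rng(0);                              % reproducible synthetic example
X = randn(5, 200);                   % 5 variables, 200 samples
X = X - mean(X, 2);                  % enforce zero empirical mean

C      = X * X';                     % (unnormalized) covariance matrix
[V, D] = eig(C);
[d, i] = sort(diag(D), 'descend');
V      = V(:, i);                    % eigenvectors, largest variance first

[W, S, ~] = svd(X, 'econ');          % X = W*S*V', so W spans the same basis

disp(abs(V' * W));                   % ~ identity (columns match up to sign)
disp([d, diag(S).^2]);               % eigenvalues equal squared singular values
```

The first printout is close to the identity matrix because each eigenvector matches a left singular vector up to sign, and the two columns of the second printout agree up to rounding error.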
When PCA is used for clustering, its main limitation is that it does not account for class separability, since it makes no use of the class label of the feature vector. There is no guarantee that the directions of maximum variance will contain good features for discrimination. PCA simply performs a coordinate rotation that aligns the transformed axes with the directions of maximum variance. It is only when we believe that the observed data has a high signal-to-noise ratio that the principal components with larger variance correspond to interesting dynamics, and those with lower variance correspond to noise.

2.2.3 Computing PCA with the covariance method

Following is a detailed description of PCA using the covariance method. The goal is to transform a given data set X of dimension M into an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen-Loève transform (KLT) of matrix X:

    Y = KLT{X}

Organize the data set
Suppose you have data comprising a set of observations of M variables, and you want to reduce the data so that each observation can be described with only L variables, L < M. Suppose further that the data are arranged as a set of N data vectors, written as column vectors, each of which has M rows. Place the column vectors into a single matrix X of dimensions M × N.

Calculate the empirical mean
Find the empirical mean along each dimension m = 1, ..., M. Place the calculated mean values into an empirical mean vector u of dimensions M × 1:

    u[m] = (1/N) Σ_(n=1..N) X[m, n]

Calculate the deviations from the mean
Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data. Hence we proceed by centering the data as follows: subtract the empirical mean vector u from each column of the data matrix X, and store the mean-subtracted data in the M × N matrix B:

    B = X − u h

where h is a 1 × N row vector of all 1s.

Find the covariance matrix
Find the M × M empirical covariance matrix C from the outer product of matrix B with itself:

    C = E[B ⊗ B] = E[B · B*] = (1/N) B · B*

where E is the expected value operator, ⊗ is the outer product operator, and * is the conjugate transpose operator.

Find the eigenvectors and eigenvalues of the covariance matrix
Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C:

    V^(−1) C V = D

where D is the diagonal matrix of eigenvalues of C. This step will typically involve the use of a computer-based algorithm for computing eigenvectors and eigenvalues. These algorithms are readily available as sub-components of most matrix algebra systems, such as MATLAB [7][8], Mathematica [9], SciPy, IDL (Interactive Data Language), or GNU Octave, as well as OpenCV. Matrix D will take the form of an M × M diagonal matrix whose entry D[p, q] equals the mth eigenvalue λ_m of the covariance matrix C when p = q = m, and zero otherwise. Matrix V, also of dimension M × M, contains M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors are ordered and paired: the mth eigenvalue corresponds to the mth eigenvector.

Rearrange the eigenvectors and eigenvalues
Sort the columns of the eigenvector matrix V and the eigenvalue matrix D in order of decreasing eigenvalue. Make sure to maintain the correct pairings between the columns in each matrix.
Compute the cumulative energy content for each eigenvector. The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the mth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through m:

$g_m = \sum_{q=1}^{m} D_{q,q}, \quad m = 1, \dots, M.$

Select a subset of the eigenvectors as basis vectors. Save the first L columns of V as the M × L matrix W. Use the vector g as a guide in choosing an appropriate value for L: the goal is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy g is above a certain threshold, like 90 percent. In this case, choose the smallest value of L such that

$g_L / g_M \ge 0.9.$

Convert the source data to z-scores. Create an M × 1 empirical standard deviation vector s from the square root of each element along the main diagonal of the covariance matrix C, and calculate the M × N z-score matrix

$Z = B \,/\, (s \cdot h)$ (divide element-by-element).

Note: while this step is useful for various applications, as it normalizes the data set with respect to its variance, it is not an integral part of PCA/KLT.

Project the z-scores of the data onto the new basis. The projected vectors are the columns of the matrix

$Y = W^{*} \cdot Z,$

where W* is the conjugate transpose of the eigenvector matrix. The columns of matrix Y represent the Karhunen-Loeve transforms (KLT) of the data vectors in the columns of matrix X.

2.2.4 PCA Derivation

Let X be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean. We want to find a d × d orthonormal transformation matrix P such that

$Y = P^{T} X,$

with the constraint that $\mathrm{cov}(Y)$ is a diagonal matrix and $P^{-1} = P^{T}$. By substitution and matrix algebra, we obtain:

$\mathrm{cov}(Y) = \mathbb{E}[Y Y^{T}] = P^{T}\, \mathrm{cov}(X)\, P.$

We now have:

$P\, \mathrm{cov}(Y) = \mathrm{cov}(X)\, P.$

Rewrite P as d column vectors, $P = [P_1, P_2, \dots, P_d]$, and $\mathrm{cov}(Y)$ as $\mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_d)$. Substituting into the equation above, we obtain:

$[\lambda_1 P_1, \lambda_2 P_2, \dots, \lambda_d P_d] = [\mathrm{cov}(X) P_1, \mathrm{cov}(X) P_2, \dots, \mathrm{cov}(X) P_d].$

Notice that $\mathrm{cov}(X) P_i = \lambda_i P_i$: each $P_i$ is an eigenvector of the covariance matrix of X. Therefore, by finding the eigenvectors of the covariance matrix of X, we find a projection matrix P that satisfies the original constraints.

CHAPTER 3 DISCRETE COSINE TRANSFORM

3.1 Introduction:

A discrete cosine transform (DCT) expresses a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in engineering, from lossy compression of audio and images to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is critical in these applications: for compression, it turns out that cosine functions are much more efficient, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), where in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. The most common variant is the type-II DCT, which is often called simply the DCT; its inverse, the type-III DCT, is correspondingly often called simply the inverse DCT or the IDCT.
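To make the DCT-II/DCT-III relationship concrete, the following MATLAB sketch applies the standard unnormalized type-II definition (given in the next section) to a random vector and then inverts it with the type-III transform scaled by 2/N; the loop-based implementation is purely illustrative:

    N = 8; x = rand(N, 1);              % arbitrary real input
    X = zeros(N, 1); xr = zeros(N, 1);
    for k = 0:N-1                       % unnormalized DCT-II
        for n = 0:N-1
            X(k+1) = X(k+1) + x(n+1) * cos(pi * (n + 0.5) * k / N);
        end
    end
    for n = 0:N-1                       % unnormalized DCT-III of X
        xr(n+1) = X(1) / 2;             % the k = 0 term carries a factor of 1/2
        for k = 1:N-1
            xr(n+1) = xr(n+1) + X(k+1) * cos(pi * k * (n + 0.5) / N);
        end
    end
    max(abs(x - (2/N) * xr))            % ~1e-15: DCT-III * (2/N) inverts DCT-II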
Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data.

3.2 DCT forms:

Formally, the discrete cosine transform is a linear, invertible function F : R^N -> R^N, or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers x_0, ..., x_{N-1} are transformed into the N real numbers X_0, ..., X_{N-1} according to one of the following formulas.

DCT-I

$X_k = \frac{1}{2}\left(x_0 + (-1)^k x_{N-1}\right) + \sum_{n=1}^{N-2} x_n \cos\!\left[\frac{\pi n k}{N-1}\right], \quad k = 0, \dots, N-1.$

Some authors further multiply the x_0 and x_{N-1} terms by √2, and correspondingly multiply the X_0 and X_{N-1} terms by 1/√2. This makes the DCT-I matrix orthogonal, if one further multiplies by an overall scale factor of √(2/(N−1)), but breaks the direct correspondence with a real-even DFT. The DCT-I is exactly equivalent to a DFT of 2N − 2 real numbers with even symmetry. For example, a DCT-I of N = 5 real numbers abcde is exactly equivalent to a DFT of the eight real numbers abcdedcb, divided by two. Note, however, that the DCT-I is not defined for N less than 2. Thus, the DCT-I corresponds to the boundary conditions: x_n is even around n = 0 and even around n = N−1; similarly for X_k.

DCT-II

$X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right) k\right], \quad k = 0, \dots, N-1.$

The DCT-II is probably the most commonly used form, and is often simply referred to as the DCT. This transform is exactly equivalent to a DFT of 4N real inputs of even symmetry where the even-indexed elements are zero. That is, it is half of the DFT of the 4N inputs y_n, where y_{2n} = 0, y_{2n+1} = x_n for 0 ≤ n < N, and y_{4N−n} = y_n for 0 < n < 2N. Some authors further multiply the X_0 term by 1/√2 and multiply the resulting matrix by an overall scale factor of √(2/N). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. The DCT-II implies the boundary conditions: x_n is even around n = −1/2 and even around n = N − 1/2; X_k is even around k = 0 and odd around k = N.

DCT-III

$X_k = \frac{1}{2} x_0 + \sum_{n=1}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\, n \left(k + \tfrac{1}{2}\right)\right], \quad k = 0, \dots, N-1.$

Because it is the inverse of DCT-II (up to a scale factor, see below), this form is sometimes simply referred to as the inverse DCT (IDCT). Some authors further multiply the x_0 term by √2 and multiply the resulting matrix by an overall scale factor of √(2/N), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. The DCT-III implies the boundary conditions: x_n is even around n = 0 and odd around n = N; X_k is even around k = −1/2 and even around k = N − 1/2.

DCT-IV

$X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)\left(k + \tfrac{1}{2}\right)\right], \quad k = 0, \dots, N-1.$

The DCT-IV matrix becomes orthogonal if one further multiplies by an overall scale factor of √(2/N). A variant of the DCT-IV, where data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT) (Malvar, 1992). The DCT-IV implies the boundary conditions: x_n is even around n = −1/2 and odd around n = N − 1/2; similarly for X_k.

DCT V-VIII

DCT types I-IV are equivalent to real-even DFTs of even order, since the corresponding DFT is of length 2(N−1) (for DCT-I), 4N (for DCT-II/III), or 8N (for DCT-IV). In principle, there are actually four additional types of discrete cosine transform, corresponding essentially to real-even DFTs of logically odd order, which have factors of N ± ½ in the denominators of the cosine arguments.
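The DCT-I/DFT equivalence stated above for N = 5 is easy to check numerically. In this MATLAB sketch the sample values standing in for abcde are arbitrary, and only the built-in fft is assumed:

    x = [1 2 3 4 5];                    % a b c d e (arbitrary values)
    y = [x, x(end-1:-1:2)];             % even extension: a b c d e d c b
    Y = real(fft(y)) / 2;               % DFT of 2N-2 = 8 reals, divided by two
    N = numel(x); X = zeros(1, N);
    for k = 0:N-1                       % unnormalized DCT-I definition
        X(k+1) = (x(1) + (-1)^k * x(N)) / 2;
        for n = 1:N-2
            X(k+1) = X(k+1) + x(n+1) * cos(pi * n * k / (N-1));
        end
    end
    max(abs(X - Y(1:N)))                % ~0: the first N DFT bins match the DCT-I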
Equivalently, DCTs of types I-IV imply boundaries that are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary. However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs.

Inverse transforms

Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N−1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N, and vice versa. As with the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by √(2/N) so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix orthogonal.

Multidimensional DCTs

Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension. For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2-D DCT-II is given by the formula (omitting normalization and other scale factors, as above):

$X_{k_1,k_2} = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} x_{n_1,n_2} \cos\!\left[\frac{\pi}{N_1}\left(n_1 + \tfrac{1}{2}\right) k_1\right] \cos\!\left[\frac{\pi}{N_2}\left(n_2 + \tfrac{1}{2}\right) k_2\right].$

Technically, computing a two- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order. The inverse of a multidimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs, e.g. the one-dimensional inverses applied along one dimension at a time in a row-column algorithm.

The figure "Two-dimensional DCT frequencies" (not reproduced here) shows the combination of horizontal and vertical frequencies for an 8 × 8 (N1 = N2 = 8) two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by 1/2 cycle. For example, moving right one square from the top-left yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data (8 × 8) is transformed into a linear combination of these 64 frequency squares.

CHAPTER 4 IMPLEMENTATION AND RESULTS

4.1 Introduction:

In the previous chapters (Chapter 2 and Chapter 3), we presented the theoretical background of the Principal Component Analysis and the Discrete Cosine Transform. In our thesis work we carried out the analysis of both transforms. To execute these tasks we chose the MATLAB platform (the name stands for "matrix laboratory"), an efficient language for digital image processing. The image processing toolbox in MATLAB is a collection of MATLAB functions that extend the capability of the MATLAB environment for the solution of digital image processing problems [13].
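As a sketch of the row-column algorithm just described (unnormalized, matching the separable formula above; the 8 × 8 magic-square block is just a stand-in for real image data):

    % 1-D DCT-II matrix: row k, column n holds cos(pi*(n+0.5)*k/N).
    dctmat = @(N) cos(pi * (0:N-1)' * ((0:N-1) + 0.5) / N);
    A = magic(8);                       % sample 8 x 8 block standing in for an image
    T = dctmat(8);
    X = T * A * T';                     % DCT along columns, then along rows
    % X(k1+1, k2+1) equals the double sum over n1, n2 of
    % A(n1+1, n2+1) * cos(pi*(n1+0.5)*k1/8) * cos(pi*(n2+0.5)*k2/8).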
4.2 Practical Implementation of Performance Analysis: As discussed earlier, we are going to perform the analysis for the two transform methods applied to the images as,

DC Power Source Utilization Engineering Essay

Many industrial applications have begun to require higher power apparatus in recent years. Some medium voltage motor drives and utility applications require medium voltage and megawatt power levels. For a medium voltage grid, it is troublesome to connect only one power semiconductor switch directly. As a result, a multilevel power inverter structure has been introduced as an alternative in high power and medium voltage situations. A multilevel inverter is a power electronic device built to synthesize a desired AC voltage from several levels of DC voltages. The concept of multilevel converters has been around since 1975. The term multilevel began with the three-level converter; subsequently, several multilevel converter topologies have been developed, and plentiful multilevel converter topologies have been proposed during the last two decades. Contemporary research has engaged novel converter topologies and unique modulation schemes. There are three major multilevel converter structures: cascaded H-bridge converters with separate DC sources, diode clamped (neutral-clamped) converters, and flying capacitor (capacitor clamped) converters [1]. Although the diode clamped multilevel inverter is commonly discussed in the literature, there has been considerable interest in the series-connected, or cascaded, H-bridge inverter topologies [2]. The elementary concept of a multilevel converter for achieving higher power is to use a series of power semiconductor switches with several lower voltage DC sources to perform the power conversion by synthesizing a staircase voltage waveform. Capacitors, batteries, and renewable energy voltage sources can be used as the multiple DC voltage sources [1]. Multilevel power conversion has become increasingly popular in recent years due to the advantages of high power quality waveforms, low electromagnetic compatibility (EMC) concerns, low switching losses, and high-voltage capability. The primary disadvantage of multilevel power conversion technology is the large number of semiconductor devices required. This does not yield a significant cost increase, since lower-voltage devices may be used, but an increase in gate drive circuitry and a more elaborate mechanical layout are required [3].

Project Overview

This project involves the design and construction of a single phase 3-level H-bridge inverter using IGBTs. An H-bridge is an electronic circuit which enables a voltage to be applied across a load in either direction. These circuits allow DC motors to run forwards and backwards. H-bridges are available as integrated circuits, or can be built from discrete components. In this single phase H-bridge inverter circuit, the IGBTs are used as power devices that are operated as switches by applying a control signal to the gate terminal of each IGBT. The insulated gate bipolar transistor, or IGBT, is a three-terminal power semiconductor device noted for high efficiency and fast switching. The software used is MATLAB Simulink. Simulink is a commercial tool for modeling, simulating, and analyzing multidomain dynamic systems. Its primary interface is a graphical block diagramming tool and a customizable set of block libraries.

The Aims and Objectives

The aim of this project is to simulate a single phase 3-level H-bridge inverter (DC to AC converter) using MATLAB Simulink and then construct it. The objectives of this project are as follows: To investigate the application of the H-bridge inverter.
To assemble the circuits using the software, implement them in hardware, and troubleshoot the hardware. To analyze the operation of the single-phase 3-level inverter in both software and hardware.

CHAPTER 2 LITERATURE REVIEW

Inverter

Power electronics converters may be classified into four categories based on the input source and the desired output characteristics, as shown in Figure 2.1: rectifiers (AC input, DC output), AC regulators (AC input, AC output), DC choppers (DC input, DC output), and inverters (DC input, AC output).

Figure 2.1: Converter classification

A DC-to-AC converter is known as an inverter. The function of an inverter is to change a DC input voltage to a symmetrical AC output voltage of desired magnitude and frequency. The output voltage could be fixed or variable, at a fixed or variable frequency. Inverters can be built with different numbers of output phases; in practice, single phase and three phase inverters are the most common. The implementation of the inverter circuit involves power devices such as the SCR, MOSFET, IGBT, GTO, and forced-commutated thyristor, which are controlled to turn on and off during operation as a converter. The inverter generally uses a PWM control signal to produce an AC output voltage [3].

Single Phase H-Bridge Inverter Operation

The H-bridge inverter, sometimes called a full bridge, consists of four switches (see Figure 2.2). A boost converter is required, as this system has no means of stepping up the input. Switches S1-S4 and S2-S3 make up two switch pairs. When S1 and S4 are on, the output voltage is a positive pulse, and when S2 and S3 are on, the output is a negative pulse. The phase sequence, frequency, output magnitude, and harmonics can be controlled through appropriate switching devices, in conjunction with other equipment.

Figure 2.2: Single phase H-bridge inverter

Single Phase Multilevel H-Bridge Inverter

There are two types of multilevel H-bridge inverter that could be selected for this project: separate DC source and single DC source. Each type has its pros and cons. The advantages of the separate DC source type are: The number of possible output voltage levels is more than twice the number of DC sources (m = 2s + 1). The series of H-bridges makes for a modularized layout and packaging, which enables the manufacturing process to be done more quickly and cheaply. The disadvantage is: Separate DC sources are required for each of the H-bridges, which limits its application to products that already have multiple SDCSs readily available. Each H-bridge cell requires an isolated DC source. The isolated sources are typically provided from a transformer/rectifier arrangement, but may be supplied from batteries, capacitors, or photovoltaic arrays to add up the output voltages. This topology was patented by the Robicon Group in 1996 and is one of the company's standard drive products [2]. On the other hand, for the single DC source multilevel H-bridge inverter, the advantage is that only one DC supply is used, which does not limit its application. The disadvantage of the single DC source type is that a transformer is needed to add up the output voltages.

Separate DC Source Multilevel H-Bridge Inverter

A single-phase structure of an m-level cascaded inverter is illustrated in Figure 2.3. Each separate DC source (SDCS) is connected to a single-phase full-bridge, or H-bridge, inverter.
Each inverter level can generate three different voltage outputs, +Vdc, 0, and −Vdc, by connecting the DC source to the AC output through different combinations of the four switches S1, S2, S3, and S4. To obtain +Vdc, switches S1 and S4 are turned on, whereas −Vdc is obtained by turning on switches S2 and S3. By turning on S1 and S2, or S3 and S4, the output voltage is 0. The AC outputs of each of the different full-bridge inverter levels are connected in series such that the synthesized voltage waveform is the sum of the inverter outputs. The number of output phase voltage levels m in a cascade inverter is defined by m = 2s + 1, where s is the number of separate DC sources [1].

Figure 2.3: Single-phase structure of a multilevel cascaded H-bridge inverter

An example phase voltage waveform for a nine-level cascaded inverter and all H-bridge cell output waveforms are shown in Figure 2.4. In this thesis, all DC voltages are assumed to be equal. For a sinusoidal-like waveform, each H-bridge output waveform must be quarter-symmetric, as illustrated by the V1 waveform in Figure 2.4. Obviously, no even harmonic components are present in such a waveform. To minimize THD, all switching angles must be numerically calculated.

Figure 2.4: Waveform showing a nine-level output phase voltage and each H-bridge output voltage

One of the advantages of this structure is that the number of possible output voltage levels is more than twice the number of DC sources (m = 2s + 1). The other advantage is that the series of H-bridges makes for a modularized layout and packaging, which enables the manufacturing process to be done more quickly and cheaply. On the other hand, the main disadvantage of this topology is that separate DC sources are required for each of the H-bridges, which limits its application to products that already have multiple SDCSs readily available. The sources are typically provided from a transformer/rectifier arrangement, but may be supplied from batteries, capacitors, or photovoltaic arrays.

Single DC Source Multilevel H-Bridge Inverter

In the thesis by Zhong Du, Leon M. Tolbert, John N. Chiasson, and Burak Özpineci entitled "A Cascade Multilevel Inverter Using a Single DC Source," a method is presented showing that a cascade multilevel inverter can be implemented using only a single DC power source and capacitors. Without requiring transformers, the proposed scheme allows the use of a single DC power source (for example, a battery or a fuel cell stack), with the remaining n−1 DC sources being capacitors. Figure 2.5 shows the single DC source multilevel H-bridge inverter. The DC source for the first H-bridge (H1) is a DC power source with an output voltage of Vdc, while the DC source for the second H-bridge (H2) is a capacitor voltage to be held at Vdc/2. The output voltage of the first H-bridge is denoted by v1 and the output of the second H-bridge is denoted by v2, so that the output of this two-DC-source cascade multilevel inverter is v(t) = v1(t) + v2(t). By opening and closing the switches of H1 appropriately, the output voltage v1 can be made equal to −Vdc, 0, or Vdc, while the output voltage of H2 can be made equal to −Vdc/2, 0, or Vdc/2 by opening and closing its switches appropriately.

Figure 2.5: Single DC source multilevel H-bridge inverter

IGBTs Versus MOSFETs

The power MOSFET is a device that is voltage- and not current-controlled. MOSFETs have a positive temperature coefficient, stopping thermal runaway.
The on-state resistance has no theoretical limit, hence on-state losses can be far lower. The MOSFET also has a body-drain diode, which is particularly useful in dealing with limited freewheeling currents. All these advantages and the comparative elimination of the current tail soon meant that the MOSFET became the device of choice for power switch designs. Then, in the 1980s, the IGBT came along. The IGBT is a cross between the power MOSFET and a bipolar power transistor: it has the output switching and conduction characteristics of a bipolar transistor but is voltage-controlled like a MOSFET. In general, this means it combines the high-current handling capability of a bipolar with the ease of control of a MOSFET. However, the IGBT still has the disadvantages of a comparatively large current tail and no body-drain diode. Early versions of the IGBT were also prone to latch-up, but nowadays this is pretty well eliminated. Another potential problem with some IGBT types is the negative temperature coefficient, which could lead to thermal runaway and makes the paralleling of devices hard to achieve effectively. This problem is now being addressed in the latest generations of IGBTs that are based on non-punch-through (NPT) technology. This technology has the same basic IGBT structure (see Figure 2.6) but is based on bulk-diffused silicon, rather than the epitaxial material that both IGBTs and MOSFETs have historically used [4].

Figure 2.6: NPT IGBT cross section

The comparisons between MOSFETs and IGBTs are summarized below.

Table 2.1: Comparisons between IGBTs and MOSFETs

IGBT characteristics: low duty cycle; low frequency; narrow or small line or load variations; high-voltage applications (>1000 V); >5 kW output power; operation at high junction temperature is allowed (>100 °C). IGBT applications: motor control at low frequency; uninterruptible power supplies (UPS) with constant load, typically low frequency; welding with high average current and low frequency; low-power lighting at low frequency.

MOSFET characteristics: long duty cycles; high frequency applications (>200 kHz); wide line or load variations; low-voltage applications. MOSFET applications: switch mode power supplies (SMPS) with hard switching above 200 kHz; SMPS with ZVS below 1000 watts; battery charging [4].

Applications of Inverters

There are many applications of inverters available today. Some of them are as follows.

DC power source utilization: An inverter converts the DC electricity from sources such as batteries, solar panels, or fuel cells to AC electricity. The electricity can be at any required voltage; in particular, it can operate AC equipment designed for mains operation, or be rectified to produce DC at any desired voltage. Grid-tie inverters can feed energy back into the distribution network, because they produce alternating current with the same wave shape and frequency as supplied by the distribution system. They can also switch off automatically in the event of a blackout. Micro-inverters convert direct current from individual solar panels into alternating current for the electric grid.

Electric vehicle drives: Adjustable speed motor control inverters are currently used to power the traction motors in some electric locomotives and diesel-electric locomotives, as well as some battery electric vehicles and hybrid electric highway vehicles such as the Toyota Prius. Various improvements in inverter technology are being developed specifically for electric vehicle applications.
In vehicles with regenerative braking, the inverter also takes power from the motor (now acting as a generator) and stores it in the batteries.

Uninterruptible power supplies: An uninterruptible power supply (UPS) uses batteries and an inverter to supply AC power when mains power is not available. When mains power is restored, a rectifier is used to supply DC power to recharge the batteries.

Variable-frequency drives: A variable-frequency drive controls the operating speed of an AC motor by controlling the frequency and voltage of the power supplied to the motor. An inverter provides the controlled power. In most cases, the variable-frequency drive includes a rectifier so that DC power for the inverter can be provided from mains AC power. Since an inverter is the key component, variable-frequency drives are sometimes called inverter drives or just inverters.

Induction heating: Inverters convert low frequency mains AC power to a higher frequency for use in induction heating. To do this, AC power is first rectified to provide DC power. The inverter then changes the DC power to high frequency AC power.

CHAPTER 3 METHODOLOGY

Introduction

This chapter presents the proposed method of this project to build a single phase multilevel H-bridge inverter. The project can be divided into two main parts of study: software and hardware implementation. For the software part, the PIC C Compiler is used to program the microcontroller, and MATLAB is used to simulate the inverter circuit before implementing it in hardware. In addition, Proteus 7 Professional is used to simulate the driver circuit before building the hardware. The summary of the project is shown in Figure 3.1.

Figure 3.1: The project summary (software part prepared (microcontroller), hardware part prepared, troubleshooting, interfacing, result)

Design of the H-Bridge Inverter System

The H-bridge inverter system was constructed in three main stages: the microcontroller, the power electronics driver, and the power electronics inverter. Each part was treated as a separate functional block. Figure 3.2 shows the block diagram of how the stages of the inverter system are organized: the power electronic driver circuit and microcontroller stage form the low voltage side, and the power electronics inverter circuit is the high voltage side.

Figure 3.2: The block diagram of the inverter system (DC voltage input to the power electronics inverter circuit, AC output, with the microcontroller feeding the power electronic driver circuit)

Microcontroller

A microcontroller is a computer-on-a-chip optimised to control electronic devices. The microcontroller chip used for this project is the PIC16F877A. In this project, the microcontroller is used to develop the triggering signals for the IGBTs and to interface with the single phase inverter circuit as the control signal source for the gate driver. To implement the microcontroller part, the program for triggering the IGBTs was written using the PIC C Compiler. The source code is written in a text editor or notepad, or directly in the PIC C Compiler, and saved as a *.c file. After the program is successfully compiled, a *.hex file is generated. The hex file was tested by simulation in Proteus 7 Professional to check the output generated by the program. After the correct output was obtained, the *.hex file was uploaded to the PIC16F877A using the PIC programmer.
The process of implementing the microcontroller is shown in Figure 3.3. This microcontroller part is the first part that was implemented in hardware.

Figure 3.3: The process of implementing the microcontroller

Power Electronics Driver

A driver is an electronic component used to control another circuit or component, such as a high-power transistor. Unlike the bipolar transistor, which is current-driven, IGBTs, with their insulated gates, are voltage-driven. The driver allows the user to speed up or slow down the switching speed according to the requirements of the application. The control circuitry supplies low-current driving signals that are referenced to controller ground. To turn on an IGBT switch, a logic-one signal is applied to its gate with respect to its source, and this signal must deliver sufficient power; this requirement cannot be met by the control circuit alone. Figure 3.4 shows how signals need to be applied to IGBT switches for effective operation.

Figure 3.4: Control signals need to be applied to the gate with respect to the source

The driver chosen is the IR2110, which is a dual driver. The IR2110 high voltage bridge driver is a power integrated circuit designed to drive two insulated gate devices. The typical connection of the driver is shown in Figure 3.5. The two channels of the IR2110 are completely independent of one another. The HO output is controlled by the HIN input, and the LO output is controlled by the LIN input. The two inputs of the IR2110 are logically coupled to the shutdown (SD) pin through an AND gate. If HIN and LIN both go high, then the IR2110 will be shut down until one or both inputs go low. This measure helps prevent the catastrophic situation where both Q1 and Q2 turn on at the same time and short-circuit the input source [5].

Figure 3.5: Typical connection of the IR2110 high voltage bridge driver

Isolation Using the Optocoupler

An optocoupler, sometimes referred to as an optoisolator, allows two circuits to exchange signals yet remain electrically isolated. This is usually accomplished by using light to relay the signal. The standard optocoupler circuit design uses an LED shining on a phototransistor: the signal is applied to the LED, which then shines on the transistor in the IC. The optocoupler circuit is shown in Figure 3.6. In this project, the optocoupler is used because the source and destination are at very different voltage levels: the source is the microprocessor, operating from 5 V DC, while it is being used to control the IGBTs, which switch at a higher voltage. In such situations the link between the two must be isolated, to protect the microprocessor from overvoltage damage. Optocouplers offer the following advantages for driving the high side IGBT in any topology: They provide a very high isolation voltage. Signals from DC to several MHz can be handled. They can be easily interfaced to microcomputers, other controller ICs, or any PWM IC.

Figure 3.6: Optocoupler circuit

The low voltage side circuit, which consists of the PIC, driver, and optocoupler, was first constructed in Proteus 7 Professional so that the generated output could be compared with the hardware results. The circuit is shown in Figure 3.7.

Figure 3.7: Low side voltage simulation

Power Electronic Inverter

The power electronics inverter part is the main part of the system, because this circuit performs the conversion from DC to AC.
The circuit consists of four IGBTs that act as switches, a DC source, and the load. Figure 3.8 shows a diagram of the H-bridge power electronics inverter stage.

Figure 3.8: H-bridge power electronics inverter stage

For this project, however, the inverter circuit used is the 3-level H-bridge inverter circuit. The circuit was first constructed in MATLAB, as in Figure 3.9, and simulated to see the result.

Figure 3.9: 3-level H-bridge inverter circuit constructed in MATLAB

The block parameters for the IGBTs were set as in Figure 3.10. The switching frequency used for this circuit is 50 Hz, so the fundamental period of the waveform is Pf = 1/f = 1/50 = 0.02 s.

Figure 3.10: The block parameter settings for the IGBTs

In addition, the phase delays, or switching times, of the IGBTs were also set. Table 3.1 shows the switching times of the IGBTs. After the simulation was successful, the circuit of the single 3-level H-bridge inverter was constructed.

Table 3.1: The switching times of the IGBTs
IGBT 1 and IGBT 3: 10 ms (π)
IGBT 2 and IGBT 4: 0 ms (0)
IGBT 5 and IGBT 7: 7 ms (0.7π)
IGBT 6 and IGBT 8: 3 ms (0.3π)

For the switches, the IRGB10B60KDPBF IGBT was selected for this design. It is very important to choose the correct switches for the inverter circuit, because the performance of the design depends directly on this. This IGBT was chosen because it has an ultra-fast recovery diode, offers benchmark efficiency for motor control, and provides excellent current sharing in parallel operation. In addition, the IGBT was selected because it is able to withstand the power rating of the inverter. Table 3.2 shows some of the features of the selected IGBT.

Table 3.2: The features of the IRGB10B60KDPBF IGBT
Drain to source voltage (Vds): 600 V
Drain current (Id): 12 A
Rise time: 20 ns
Fall time: 23 ns
Short circuit capability: 10 μs

Figure 3.4: 3-level H-bridge inverter circuit
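To visualize the staircase synthesis described in Chapter 2, the following MATLAB sketch (separate from the Simulink model above; the DC voltage and firing angles are illustrative assumptions, not the values in Table 3.1) sums two quarter-symmetric H-bridge outputs into a five-level phase voltage at the 50 Hz fundamental:

    s = 2; Vdc = 100;                   % two cascaded bridges -> m = 2s+1 = 5 levels
    theta = [15 45] * pi/180;           % illustrative quarter-symmetric firing angles
    f = 50;                             % 50 Hz fundamental, period 0.02 s
    wt = linspace(0, 4*pi, 4000);       % two fundamental cycles in angle
    p = mod(wt, 2*pi);                  % angle within one cycle
    v = zeros(size(wt));
    for i = 1:s                         % each bridge contributes +Vdc, 0 or -Vdc
        pos = (p >= theta(i)) & (p <= pi - theta(i));
        neg = (p >= pi + theta(i)) & (p <= 2*pi - theta(i));
        v = v + Vdc * (pos - neg);      % quasi-square wave from one H-bridge
    end
    plot(wt / (2*pi*f), v);             % convert angle to time for the 50 Hz case
    xlabel('t (s)'); ylabel('v(t) (V)');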

Saturday, July 20, 2019

Internet Pro Or Con

Worldwide Disaster: Right at Your Fingertips

Internet junkies and world leaders alike are dealing with a phenomenon they do not fully understand: the internet, a vast, ungovernable, intimate alter-reality through which almost anything is possible. Although many acclaim the internet as a harbinger of a new age and extol its virtues as an information source, the internet brings challenges few are ready to face. The versatility of the internet brings these troubles into many realms of our everyday life. This paper will discuss how the internet hurts commerce, international relations, and interpersonal relationships.

The commercial industries have latched onto the internet as if it were free money. Many, though, have been caught unaware. Commerce suffers greatly from information leaks and infringement. One of the largest losses comes from the loss of trade secrets. Joseph Kizza, an expert researcher in the field of internet influence, states the problem succinctly:

Two types of information can leak on the internet: (1) information on devices, designs, processes, software designs, and many other industrial processes, and (2) information on individual employees' life possessions -- employee-accumulated knowledge and experience... When an employee is hired by a company he/she usually signs a contract with a new employer against disclosure of information "acquired in the course of employment." But by the nature of the internet an employee can live by this contract and yet disclose as much information, most times unknowingly, into the internet community. (147)

Such information leaks can do great damage to individual companies in a competitive environment. Years of research and millions of dollars can be leaked out unwittingly. Infringement uses these trade secrets for gain. An infringer is anyone who uses proprietary information to profit undeservedly. But, unlike with other lawbreakers, no public law enforcement can be used to investigate an infringer (Kizza 78). The owner of patents or copyrights must pay any expenses incurred in investigating and prosecuting. Considering the inability to trace internet access in such a case, few infringers are ever caught. This can be devastating to commerce (Kizza 78).

Concerning international relations, the internet has already done much damage. The British Broadcasting Company ran a program in 1995 explaining how, before any real bombing began in the Gulf War, the US government used internet warfare to drop the "I-Bomb" on Saddam Hussein's information systems (Bourdieu 57). The program intimated that the damage done in such warfare is more devastating than the physical damage done by the bombing.

Friday, July 19, 2019

Biography Of Genghis Khan

Biography of Genghis Khan

The old world had many great leaders. Alexander the Great, Hannibal, and even Julius Caesar met with struggle on their rise to power. Perhaps Genghis Khan was the most significant of all these rulers. To prove that Genghis Khan was the greatest ruler, we must go back to the very beginning of his existence. We must examine such issues as Genghis's struggle for power and how his life as a child would affect his rule, his personal and military achievements, and his conquests.

Genghis Khan was originally born as Temujin in 1167. He showed early promise as a leader and a fighter. By 1206, an assembly of Mongolian chieftains proclaimed him Genghis Khan, which meant universal or invincible prince. This was a bold move for the assembly. They obviously saw some leadership qualities in Genghis that others didn't. When Genghis Khan was little, his chieftain father was poisoned. With no leader left, the tribe abandoned Genghis and his mother. They were left alone for many years to care for themselves. Throughout these years, his family met many hardships, such as shortages of food and money. Though unable to read, Genghis was a very wise man. His mother told him at a very early age the importance of trust and independence. "Remember, you have no companions but your shadow" (Grolier Encyclopedia, 1995, CD-ROM). This quote was to mean to Genghis: don't put too much trust in anyone, trust no one but yourself, and if you must go your own way, then do so.

In 1206, Genghis Khan was proclaimed the ruler of Mongolia. Genghis was a very respected leader. Like other leaders, he knew what his people wanted: everything that is good and nothing that is bad. Genghis knew he could not promise this, so instead he pledged to share both the sweet and the bitter of life. Genghis did not want to end up being poisoned like his father, so instead he made alliances and attacked anyone who posed a serious threat. Through this method of leadership, Genghis's army grew to the point where it was unbeatable. Genghis contributed a lot to the Chinese and even Western civilizations. Perhaps his greatest contribution was a code of laws that he declared. Since Genghis couldn't read or write, these laws were documented by one of his followers. His laws were carried on by people through the many generations to the point of still being in use