Wednesday, January 16, 2019



Language and Communication

A morpheme is the minimal meaningful grammatical unit of a language, one that cannot be further divided (e.g. in, come, and -ing, which together form incoming).
Morphology is the study of word formation: how new words are created and how words take on different shapes. A word’s shape can also signal a different meaning, and it is the context that helps us draw out that meaning.
e.g. ‘Hamza is assigned to manage the media group as the president.’ ‘Hamza successfully trained the management of the media group.’ ‘Hamza’s first session was about “Reporting professionally is easily manageable.”’
In the above examples we see that the shape of the word (manage, management, manageable) changes according to the context, and with it the meaning.
Types of Morphemes
Morphemes can be classified as either free or bound.
Free morphemes are those that can occur on their own, without any other morpheme necessarily attached to them. As such, free morphemes can stand by themselves as single, fully independent words, e.g. manage as in management, mother as in motherhood, or words such as pen, tea, and man.
Bound morphemes are bound in the very sense that they cannot stand alone and are thus necessarily attached to another form. For instance, “-ment” as in management or “un-” as in unhappy are bound morphemes.
Morphological Processes
Ø  Derivational morphemes are used to create new words. They can also change the grammatical category of a word (e.g. the verb manage becomes the noun management).

Process of Affixation:
To affix means to attach a form to a base word. The process of affixation helps us to form new words. Affixation can take place in three ways (illustrated in the sketch after the list):
Ø  Prefixes are attached at the beginning of a base word, e.g. “un” in uneducated.
Ø  Infixes are inserted in the middle of a base word, e.g. “in” in mother-in-law.
Ø  Suffixes are attached at the end of a base word, e.g. “tion” in education.
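As a rough illustration, the three affix types can be modelled as simple string operations. The Python sketch below is only illustrative: the affixes and bases are taken from the examples above, and real affixation also involves spelling adjustments that naive concatenation ignores.

# A minimal sketch of affixation as string operations.
# Illustrative only: real affixation involves spelling changes
# (e.g. happy + -ness -> happiness) that this sketch ignores.

def add_prefix(base, pre):
    """Prefixes attach at the beginning of the base word."""
    return pre + base

def add_suffix(base, suf):
    """Suffixes attach at the end of the base word."""
    return base + suf

def add_infix(outer, inner):
    """Infixes sit inside the base, as in mother-in-law."""
    return f"{outer[0]}-{inner}-{outer[1]}"

print(add_prefix("educated", "un"))        # uneducated
print(add_suffix("manage", "ment"))        # management
print(add_infix(("mother", "law"), "in"))  # mother-in-law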

Properties of Language


How does human language differ from animal languages?
There is a clear distinction between human and animal language.
Human languages differ from animal languages in many ways. Some of the major features of human languages are 1) displacement, 2) arbitrariness, 3) productivity, 4) cultural transmission, 5) discreteness, and 6) duality. Animal languages do not possess these features:

Displacement
A major difference between animal language and human language is the displacement feature of human language. It means that human language can overcome the limitations of time and space. Animal communication is designed for the here and now, but human language can relate to events removed in time and space.

Arbitrariness
A major difference between animal language and human language is the arbitrariness of human language. It means that human linguistic signs have no natural connection between their form and their meaning; the only exceptions are onomatopoeic words. In animal communication, by contrast, each sign is tied directly to its meaning.

Productivity
A major difference between animal language and human language is the productivity of human language. This refers to the human ability to combine limited linguistic signs to produce new sentences and expressions. Animals are incapable of this as animal signals have fixed reference.
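To make productivity concrete, here is a toy sketch (the vocabulary and the embedding rule are invented for the example, not a real grammar): a finite word list plus one recursive rule already generates an unbounded set of new sentences.

import random

# Toy illustration of productivity: finite means, unbounded output.
# The vocabulary and the single embedding rule are invented examples.
NOUNS = ["the dog", "the cat", "my friend"]
VERBS = ["saw", "chased", "heard"]

def clause():
    return f"{random.choice(NOUNS)} {random.choice(VERBS)} {random.choice(NOUNS)}"

def sentence(depth=0):
    # Clauses can be embedded under "said that" again and again,
    # so a handful of words yields ever-new sentences.
    if depth < 3 and random.random() < 0.5:
        return f"{random.choice(NOUNS)} said that {sentence(depth + 1)}"
    return clause()

for _ in range(3):
    print(sentence())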

Cultural Transmission
A major difference between animal language and human language is the cultural transmission of human language. While animals get their language genetically, human beings acquire language. Human languages are passed down by the society in which one lives and grows up.

Discreteness
A major difference between animal language and human language is the discreteness of human language. This refers to the distinctness of the sounds used in human languages. Every language uses a set of distinct sounds; each is different from the rest, and sounds can be repeated or combined with one another to form new meanings. Animal languages do not have this feature of discreteness.

Duality
One major difference between animal language and human language is the duality of human language, which is not found in animal languages. Human language is organized at two levels at once: at one level we produce individual sounds (such as b, i, and n) that have no meaning by themselves, and at another level those same sounds combine into meaningful units (bin, nib). Animal signals lack this two-level patterning; each signal is a single, fixed form with a single meaning.
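A toy sketch of this two-level structure (the sound set and the mini-lexicon are invented for illustration): the same meaningless sound units recombine into distinct meaningful words.

from itertools import permutations

# Duality sketch: three meaningless sound units recombine into
# different meaningful words (a toy lexicon for illustration).
SOUNDS = ("b", "i", "n")
LEXICON = {"bin": "a container", "nib": "the tip of a pen"}

for combo in permutations(SOUNDS):
    word = "".join(combo)
    if word in LEXICON:
        print(word, "->", LEXICON[word])  # bin and nib share the same sounds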




Vowel Sounds

[Figure: chart of English vowels, grouped into front, central, and back]

Vowel sounds are typically voiced sounds, and much of the difference in pronunciation between words comes from their vowels. These sounds are produced with a free flow of air: the airflow is unobstructed, and no friction is felt while producing them. To describe vowel sounds, we describe the way in which the tongue influences the shape of the space through which the airflow must pass. To talk about the place of articulation, we think of the space inside the mouth as having a front and a back area. Thus, in pronouncing hit and heat we raise the tongue toward the upper front of the mouth; in contrast, the vowel sounds in hat and hot are produced with the tongue in a lower position. Vowels written with the length mark (ː, the “two dots”) are long vowels, while vowels without it are short. We have four long vowels and seven short vowels.
Keeping in view the above figure, we have three categories of vowels (summarised in the sketch after the list):
Ø  Front vowels: so called because the front part of the tongue is raised while producing these sounds. Front vowels are unrounded. There are four front vowels; the second (/ɪ/) is called centralised because it is produced nearer the centre of the vowel space.
Ø  Back vowels: so called because the back part of the tongue is raised while producing these sounds. Back vowels are rounded. There are four back vowels, two of them long; the second (/ʊ/) is called centralised because it lies nearer the central vowels.
Ø  Central vowels: so called because the central part of the tongue is used while producing these sounds. Central vowels are also unrounded. They are three in number.
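The inventory can be written out as a small table. The sketch below assumes the eleven-vowel set that the figure appears to chart (standard IPA symbols for English, with ː as the length mark); the grouping is an assumption, chosen to match the counts given above.

# English vowel inventory as the figure appears to chart it
# (assumed grouping: four front, three central, four back vowels;
# the length mark "ː" identifies the long vowels).
VOWELS = {
    "front":   ["iː", "ɪ", "e", "æ"],   # /ɪ/ is the centralised one
    "central": ["ə", "ɜː", "ʌ"],
    "back":    ["uː", "ʊ", "ɔː", "ɒ"],  # /ʊ/ is the centralised one
}

long_vowels  = [v for group in VOWELS.values() for v in group if "ː" in v]
short_vowels = [v for group in VOWELS.values() for v in group if "ː" not in v]
print(len(long_vowels), "long vowels:", long_vowels)     # 4 long
print(len(short_vowels), "short vowels:", short_vowels)  # 7 short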

Human Signed Language

Signed Language: Sign language refers to a mode of communication, distinct from spoken languages, which uses visual gestures made with the hands, accompanied by body language, to express meaning.
Signed languages, like spoken languages, are highly structured linguistic systems; they have their own sets of phonological, morphological and syntactic characteristics. Despite the complex differences between spoken and signed languages, the brain areas associated with them are thus far thought to have much in common.
It has been determined that the brain's left hemisphere is dominant for producing and understanding sign language, just as it is for speech. In 1861, Paul Broca studied patients who could understand spoken language but could not produce it; the damaged region, in the left hemisphere, was named Broca's area. Soon after, in 1874, Carl Wernicke studied patients with the reverse deficit: they could produce spoken language but could not comprehend it. The damaged region, also in the left hemisphere, was named Wernicke's area.

The same pattern holds for signers. Signers with damage in Broca's area have problems producing signs, while those with damage in Wernicke's area, in the temporal lobe of the left hemisphere, have problems comprehending signed languages.

Early on, it was noted that Broca's area lies near the part of the motor cortex controlling the face and mouth, and that Wernicke's area lies near the auditory cortex. These motor and auditory areas are important in spoken language production and processing, but their connection to signed languages had yet to be uncovered. For this reason the left hemisphere was described as the verbal hemisphere, with the right hemisphere deemed responsible for spatial tasks. This classification was used to deny that signed languages were the equals of their spoken counterparts, before it became more widely agreed that, given the similarities in cortical connectivity, they are linguistically and cognitively equivalent.

In the 1980s, deaf patients who had suffered left-hemisphere strokes were studied to explore the brain's connection with signed languages. The left perisylvian region was found to be functionally critical for language, spoken and signed alike. Its location near several key auditory-processing regions had earlier fed the belief that language processing required auditory input, a belief used to discredit signed languages as "real languages." This research opened the door to linguistic analysis and further study of signed languages.

1. Different countries have different sign languages.
This is the sign for the word "math" in two different sign languages—American Sign Language on the left, and Japanese Sign Language on the right. Why should there be more than one sign language? Doesn't that just complicate things? This question would make sense if sign language were a system invented and then handed over to the deaf community as an assistive device. But sign languages, like spoken languages, developed naturally out of groups of people interacting with each other. We know this because we have observed it happen in real time.

2. Given a few generations, improvised gestures can evolve into a full language.
In 1980, the first Nicaraguan school for the deaf opened. Students who had been previously isolated from other deaf people brought the gestures they used at home, and created a sort of pidgin sign with each other. It worked for communication, but it wasn't consistent or rule-governed. The next generation who came into the school learned the pidgin sign and spontaneously started to regularize it, creating rules for verb agreement and other consistent grammatical devices.

3. Sign language does not represent spoken language.
Because sign languages develop within deaf communities, they can be independent of the surrounding spoken language. American Sign Language (ASL) is quite different from British Sign Language (BSL), despite the fact that English is the spoken language of both countries. The above picture shows the sign WHERE in BSL (on the left) and ASL (on the right).
That said, there is a lot of contact between sign language and spoken language (deaf people read and write or lip read in the surrounding language), and sign languages reflect this. English can be represented through fingerspelling or artificial systems like Signed Exact English or Cued Speech. But these are codes for spoken or written language, not languages themselves.

4. Sign languages have their own grammar.
There are rules for well-formed sentences in sign language. For example, sign language uses the space in front of the signer to show who did what to whom by pointing. However, some verbs point to both the subject and the object, some point only to the object, and some don't point at all. Another rule is that a well-formed question must have the right eyebrow position: eyebrows should be down for a who-what-where-when-why question (see the ASL WHERE picture above), and up for a yes/no question. If you apply the rules incorrectly or inconsistently, you will have a "foreign" accent!

5. Children acquire sign language in the same way they acquire spoken language.
The stages of sign language acquisition are the same as those for spoken language. Babies start by "babbling" with their hands. When they first start producing words, they substitute easier handshapes for more difficult ones, making for cute "baby pronunciations." They start making sentences by stringing signs together and only later get control of all the grammatical rules. Most importantly, as seen in the above video, they learn through natural interaction with the people around them.

6. Brain damage affects sign language in the same way it affects spoken language.
When fluent signers have a stroke or brain injury, they may lose the ability to sign, but not to make imitative or non-sign gestures. They may be able to produce signs, but not put them in the correct grammatical configurations. They may be able to produce sentences, but with the signs formed incorrectly, giving them a strange accent. They may be able to sign quickly and easily, but without making any sense. We know from studying speaking people that "making sounds" is quite different from "using language," because these functions are affected differently by brain damage. The same is true for signers: neurologically, making gestures is quite different from using sign language.

7. Sign language is a visual language.
This one is pretty obvious, but it's important to mention. Sign language is just like spoken language in many ways, but it's also different. Sign can be very straightforward and formal, but it can also take full advantage of its visual nature for expressive or artistic effect, as shown in the story in this video. Which, when you think about it, doesn't make sign language all that different after all. For expressive purposes, we can take full advantage of spoken language's auditory nature. We can also take advantage of facial expressions and gestures when we speak. Everything that would be in an artistic spoken performance—the words, the ordering of clauses, the pauses, the breath intake, the intonation and melody, the stressing or deemphasizing of sounds, the facial and vocal emotion, the body posture and head and hand gestures—comes through together in sign language. It looks amazing not because it shows us what sign language can do, but because it shows us what language does.
