Previous Studies On Memory For Discourse


Surface Structure

In an often-cited study, Jarvella (1971) observed that verbatim memory for spoken discourse dropped sharply at sentence and clause boundaries. Participants listened to pairs of sentences, were intermittently interrupted, and were asked to recall part of the preceding material. Recall was significantly better when the target words belonged to the current sentence than when the same words came from a preceding sentence, even when serial position was carefully controlled. Goldman et al. (1980) reported the same finding for children's reading: verbatim recall was significantly better when the target material came from the sentence currently being read rather than from the preceding sentence, even when the number of intervening words was held constant. Such findings are commonly taken to suggest that surface structure is maintained while a sentence is being processed (probably to support syntactic processing and interpretation) but is lost relatively quickly once the sentence has been understood and its information added to the text base.

Text Base (Semantic Content)

The second memory representation that readers form is a representation of the semantic content, or meaning, of the text. This representation is clearly distinct from a representation of the words themselves since, for example, reading time increases with the number of propositions in a text even when the number of words is held constant (Kintsch & Keenan, 1973). Although memory for this aspect of the text lasts longer than memory for surface structure, it is far from veridical even after a short delay. For example, Kintsch et al. (1990) reported estimates of memory corresponding to accuracies of 64% after 40 minutes and 60% after two days. Moreover, because what readers recall depends on the representation they have constructed rather than on the actual words of the text, they may believe they remember sentences that were never in the text but plausibly could have been. For example, Bower et al. (1979) asked participants to read descriptions of everyday activities (such as eating in a restaurant or visiting a dentist) and found that script-typical actions were regularly recalled even when they had not appeared in the text. Memory based on the text base therefore involves reconstruction and is subject to the same distortions and biases widely documented for memory in general.


Situation Model

The final level of memory representation is the 'situation model', which represents the situation described by the text. This model contains information about the entities referred to by the text (people, places, and things) and the relations among them. In building a situation model, readers will sometimes draw inferences about the appearance, arrangement, and other properties of those entities even when such information is not given in the text. A considerable body of research shows that this model is constructed in addition to the text base. For instance, readers can make inferences about spatial relations described by a text that would be difficult or slow to derive from the propositional information in the text alone.

Similarly, under some conditions, readers track the spatial location of the protagonist in the story world (Morrow et al., 1987). Research has found that readers retain this model better than the text base or the surface structure; indeed, after a day or more, readers are likely to be able to recall only the information in the situation model (e.g., Kintsch et al., 1990). Importantly, situation models are shaped by readers' expectations and knowledge. A similar pattern of results was later replicated by Kintsch and Greene (1978) using different materials and a more modern methodology; both investigations demonstrate the central role of reconstruction in narrative recall.

Bransford and Johnson (1972) found that readers, whether of a short prompt or an extended text, likewise depend on and apply world knowledge in comprehending content. In a seminal experiment, their participants were given a short text describing a familiar task.

Validity of the Syntactic Complexity Analyzer (Lu, 2012)

The Syntactic Complexity Analyzer (Lu, 2012) is a computational system that computes the syntactic complexity of English language samples through deep syntactic parsing, using fourteen syntactic complexity measures. The system takes a written English sample in plain-text format as input and outputs fourteen indices of the sample's syntactic complexity, one for each measure.
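
To make the kind of computation involved concrete, the sketch below derives one global index, mean length of sentence (MLS), from raw text. This is not the L2SCA pipeline, which obtains its counts from Stanford Parser trees queried with Tregex patterns; the regex tokenization here is a crude stand-in used only for illustration.

```python
# Toy approximation of one global syntactic complexity index (MLS).
# The real L2SCA counts units on full parse trees; regex splitting is
# only a rough stand-in for sentence and word segmentation.
import re

def mls(text: str) -> float:
    """Mean length of sentence = words / sentences (crude approximation)."""
    sentences = [s for s in re.split(r'[.!?]+\s*', text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / len(sentences) if sentences else 0.0

sample = "The tool parses each sample. It then outputs fourteen indices."
print(f"MLS = {mls(sample):.2f}")   # 10 words over 2 sentences -> 5.00
```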

Nasseri (2015) investigated the syntactic complexity of dissertation abstracts written by 150 graduate students in Applied Linguistics and related EFL disciplines: EFL, ESL, and native-speaker (NS) writers, with fifty abstracts from each group. The EFL students were all Iranian master's students with varying L1s studying at various universities in Iran. The ESL and NS students were all master's students who had studied and submitted their dissertations at various universities in the UK; the ESL students came from different nationalities and language backgrounds. Participants were male and female students drawn from a homogeneous age group (20-40 years old). The corpus was analyzed using the L2 Syntactic Complexity Analyzer (L2SCA), a computational system for the automatic analysis of syntactic complexity developed and reliability-tested by Lu (2012). The findings revealed that the EFL group produced significantly shorter sentences (MLS) and shorter T-units (MLT) than the NS group, supporting the results of Ai and Lu (2013) and Tavakoli and Foster (2008).

Method

Participants

Participants in the current study were ten Iranian IELTS candidates (five male, five female) randomly chosen from a pool of 30 who were preparing for the IELTS examination at an English institute in Shiraz. They were between 18 and 30 years old and were selected from among modest and competent users of English. Participants' background in writing had been checked through the institute placement test. The sampling technique was intended to obtain samples from writers of equivalent language proficiency on IELTS writing tasks. In addition, two qualified and experienced IELTS teachers served as assessors (coders) of the candidates' writing papers.

Instrumentation

A total of 10 writing topics, systematically selected (every seventh item) from 70 Task 2 (essay) prompts in previously administered Cambridge IELTS writing tests, were given to each IELTS candidate to answer the research questions (see Appendix 4). The writing test consisted of argumentative essays with a minimum length requirement of 250 words.

In the directions for the writing tasks, the students were instructed to write a minimum of 250 words in 40 minutes (the standard IELTS time allotment). In the independent writing task, candidates were expected to observe the standard criteria on which their essays were to be rated: development and support of ideas relevant to the prompt and the task, organization and flow of ideas, and language use (syntax, lexis, etc.). To investigate lexical complexity, the researcher used the Lexical Complexity Analyzer (Lu, 2012), with which all essay introductions or conclusions and the 10 IELTS writing topics were analyzed for the three sub-constructs of lexical complexity: lexical diversity, lexical density, and lexical sophistication. In addition, a codification scheme adapted from Yang, Lu, and Weigle (2002) was used to explore the syntactic complexity of the candidates' essay introductions or conclusions, as well as the IELTS writing topics, through computational software, the L2 Syntactic Complexity Analyzer (L2SCA; Lu, 2010).

Data Collection Procedures

First, with the help of a homogeneity test (the institute placement test, a mock IELTS test), 30 IELTS candidates were chosen. The classification criterion was an IELTS band score between 5 and 6 (the standard IELTS cut-off scores for modest and competent users of English). Because an excess of data is a serious problem in research, particularly in qualitative studies, and processing sizeable, heterogeneous datasets involves considerable work, ten participants whose scores fell between 5 and 6 (out of the maximum IELTS band score of 9) were randomly selected to respond to the 10 IELTS writing topics. In practice, it would have been impossible to assess a large number of students' essays in order to identify their writing features; moreover, the researcher expected to reach saturation with a few candidates' writing papers.

Second, to reduce any learning effect on writing skill, the candidates' writing papers were collected over five weeks. It is also worth mentioning that all participants were informed that the 10 IELTS writing topics, administered to each candidate at regular intervals of two topics per week, were mock IELTS writing examinations; they were not aware of the purpose of the study, so as to control for the halo effect in testing.

Having collected the writing papers, and since the main purpose of this study was to explore the effects of the linguistic complexity of IELTS writing prompts on candidates' grasp of lexical and syntactic complexity, as reflected in their writing papers, and on the type of memory representation applied in their writing tasks, only the introductions or conclusions of the writing papers were analyzed to investigate the candidates' command of writing overviews. An overview in academic IELTS Writing Task 2, appearing in the introduction, the conclusion, or both, is a purpose statement: a declarative sentence summarizing the specific topic and goals of the essay based on what has been understood from the writing topic. In other words, candidates' overviews were the main source for analyzing the linguistic complexity of the writing papers, as they indicate what a candidate had grasped from the writing topic. The candidates' writing papers were analyzed by two experienced assessors according to standard IELTS criteria.

In principle, the overview of an IELTS writing paper is typically included in the introduction in order to give the reader a precise, concrete understanding of what the essay will cover. It is worth mentioning that, because the data analysis in this study was based on the frequencies of lexical and syntactic complexity features, in essays where candidates presented overviews in both the introduction and the conclusion, only the overview in the introduction was examined. This allowed the researcher to exclude additional linguistic complexity features that might appear in conclusions and influence the results of the study.

Operationalization of Variables

To operationalize linguistic complexity (both syntactic and lexical) and the type(s) of writing discourse memory representation, the current study examined the following measures.

Lexical Complexity Measures

Lexical complexity is measured through the sub-constructs of lexical diversity, lexical sophistication, and lexical density. In this study, all three sub-constructs were considered: the measures were the proportions obtained both for the lexical complexity of the 10 IELTS writing topics and for each candidate's ten introductions, the latter reflecting the candidate's understanding of the topics' lexical complexity. The concept follows Laufer and Nation (1995) and Read (2000), in which words beyond the 2,000 most frequent words in English are classified as more advanced, sophisticated, lower-frequency words; these sophisticated words include most academic vocabulary, domain-specific words, and other less frequently used words. Lexical diversity is measured as the number of different words used relative to the total number of words in a text. Following Laufer and Nation's categorization, lexical density is the proportion of lexical (content) words in a text relative to grammatical (function) words. To obtain the indices for these three variables in the introductions and conclusions of the candidates' writing papers, as well as in the IELTS writing topics, the Lexical Complexity Analyzer (Lu, 2012) was used.
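
As a rough illustration of these three sub-constructs, the sketch below computes them over a token list. The FREQUENT and LEXICAL sets are invented stand-ins for, respectively, the 2,000-word frequency band and a POS tagger's set of content words; the actual Lexical Complexity Analyzer (Lu, 2012) works from tagged, lemmatized text, and definitions of lexical density vary (here it is computed against total words).

```python
# Stand-in word sets; a real analysis would use the 2,000-word frequency
# band (Laufer & Nation, 1995) and POS-tagged content words.
FREQUENT = {"the", "a", "of", "is", "and", "writing", "task", "words"}
LEXICAL = {"writing", "task", "words", "syntactic", "complexity", "demands"}

def lexical_profile(tokens):
    n = len(tokens)
    diversity = len(set(tokens)) / n                  # different words / total words
    density = sum(t in LEXICAL for t in tokens) / n   # content words / total words
    sophistication = sum(t not in FREQUENT for t in tokens) / n  # beyond 2,000 band
    return diversity, density, sophistication

tokens = "the writing task demands syntactic complexity and words".split()
print(lexical_profile(tokens))   # (1.0, 0.75, 0.375)
```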

Syntactic Complexity Measures

Generally, eight different measures are used for syntactic complexity (SC), representing different dimensions of this multi-dimensional construct (Norris & Ortega, 2009). They comprise two global SC measures, mean length of sentence (MLS) and mean length of T-unit (MLTU); a clausal coordination measure, T-units per sentence (TU/S); a measure tapping overall clause complexity, mean length of clause (MLC); two subordination measures, finite dependent clauses per T-unit (DC/TU) and nonfinite elements per clause (NFE/C); a phrasal coordination measure, coordinate phrases per verb phrase (CP/VP); and a noun-phrase complexity measure, complex noun phrases per verb phrase (CNP/VP).

The indices intended in this study were T-units per sentence (T/S), overall clause complexity (MLC), and the proportion of dependent clauses to clauses (DC/C). A T-unit is conventionally defined as a main clause plus any subordinate clauses attached to it (Norris & Ortega, 2009). Overall clause complexity (MLC) is computed as the number of words divided by the number of clauses, a measure that Foster and Skehan (1996) consider reliable. The final intended measure, the proportion of dependent clauses to clauses (DC/C), gauges the degree of subordination in the text (Wolfe-Quintero, Inagaki, & Kim, 1998).
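
The three intended indices are simple ratios once the underlying units have been counted. The sketch below computes them from hand-supplied counts; in practice L2SCA derives these counts automatically from parse trees, and the numbers here are invented for illustration.

```python
# Ratio computation for the three intended indices from unit counts.
def syntactic_indices(words, sentences, clauses, dependent_clauses, t_units):
    return {
        "T/S": t_units / sentences,           # T-units per sentence
        "MLC": words / clauses,               # mean length of clause
        "DC/C": dependent_clauses / clauses,  # degree of subordination
    }

# Hypothetical counts for one overview paragraph.
print(syntactic_indices(words=120, sentences=6, clauses=14,
                        dependent_clauses=5, t_units=8))
```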

Writing Discourse Memory Representation Measures

Concerning writing discourse memory representation, three types of memory representation were investigated by analyzing the introductions or conclusions of the writing papers: surface representation, the syntactic phrase structure and the choice and order of words in the candidates' writing papers; propositional representation (text base), the semantic content of the writing papers, often described as sets of co-referential propositions; and situational representation, the state of affairs referred to by the writing papers, that is, the spatial or visual representation of the entities described in the content and the relationships among them.

Statistical Analysis

The data analysis method was based on a codification scheme adapted from Yang, Lu, and Weigle (2002) and on the software analyzers known as the Lexical Complexity Analyzer and the Syntactic Complexity Analyzer (Lu, 2010, 2012). To address the first research question, on the effects of the lexical and syntactic complexity of the writing task on the CAF of language production, separate descriptive and regression analyses using all relevant lexical and syntactic sub-constructs were conducted for each task, with the writing topic as the dependent variable and the selected CAF features as predictor variables. The proportions of all intended lexical sub-constructs, already calculated by the analyzers for each candidate's introduction or conclusion, were compared with the proportions of each intended IELTS writing topic's lexical sub-constructs using repeated-measures and paired-samples t-tests, and the same was done for all intended syntactic complexity sub-levels.
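
A minimal sketch of the paired comparison described above is given below, pairing each topic's index with the mean index of the overviews written on that topic. The numbers are invented, and the sketch uses SciPy rather than SPSS, which the study itself employed.

```python
# Paired-samples t-test: each topic's MLS paired with the mean MLS of the
# overviews written on that topic (all values hypothetical).
from scipy.stats import ttest_rel

topic_mls = [18.2, 21.5, 17.9, 20.3, 19.8, 22.1, 18.7, 20.9, 19.4, 21.0]
overview_mls = [16.4, 19.8, 18.2, 19.1, 17.6, 20.5, 17.9, 19.2, 18.8, 19.7]

t, p = ttest_rel(topic_mls, overview_mls)
print(f"t = {t:.2f}, p = {p:.3f}")   # significant p suggests topics and overviews differ
```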

In regard to the second research question, the candidates' introductions or conclusions were compared with what the candidates had grasped from the writing topics in order to distinguish the type(s) of memory representation: the total number of features of each memory representation was calculated for all writing topics and for all of the candidates' introductions or conclusions mentioned above. The proportions of all writing discourse memory representations in the total set of writing topics and in the total set of introductions or conclusions were then compared using SPSS (version 23) to determine which level of memory representation was most often applied by the IELTS candidates in their writing papers.

Results

The first research question of the current study explored which linguistic measures IELTS candidates applied most while writing an overview. To that end, the averages of several major syntactic complexity measures across the 10 topics and 100 overviews were first calculated. Then the intended syntactic and lexical complexity measures in the 10 randomly selected writing topics and 100 overviews were analyzed, and the average use of each was calculated. Descriptive statistics were used to discover potential significant differences among the three intended syntactic and lexical measures. The means, standard errors of the mean, and standard deviations of these measures are presented in Tables 1-5.

[Tables 1-5: means, standard errors of the mean, and standard deviations of the intended syntactic and lexical complexity measures]
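
For reference, the three statistics reported in Tables 1-5 can be computed as below for one measure; the values are hypothetical per-overview MLC scores, and the study itself computed these figures in SPSS for every intended measure.

```python
# Descriptive statistics for one hypothetical set of per-overview MLC values.
import statistics as st

mlc_values = [8.4, 9.1, 7.8, 8.9, 9.5, 8.2, 8.7, 9.0, 8.5, 8.8]
mean = st.mean(mlc_values)
sd = st.stdev(mlc_values)            # sample standard deviation
se = sd / len(mlc_values) ** 0.5     # standard error of the mean
print(f"M = {mean:.2f}, SD = {sd:.2f}, SE = {se:.2f}")
```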
