Browsing by Author "Coupland, Simon"
Now showing 1 - 20 of 81
Item Open Access
Adaptive-mutation compact genetic algorithm for dynamic environments (Springer, 2016-06-07) Gongora, Mario Augusto; Coupland, Simon; Passow, Benjamin N.; Uzor, C. J.

Item Metadata only
The application of colour FIRE to robot vision. (IEEE, 2010) Croft, D.; Coupland, Simon

Item Metadata only
An approach to type-2 fuzzy arithmetic (2003) Coupland, Simon; John, Robert, 1955-

Item Metadata only
Assessing the Provenance of Student Coursework (2024-07-09) Coupland, Simon
The Higher Education sector is mobilising vast resources in its response to the use of Generative AI in student coursework. This response includes institutional policies, training for staff and students, and AI detection tools. This paper is concerned with one aspect of this fast-moving area: the assessment of the provenance of a piece of written student coursework. The question of the provenance of student work is a surprisingly complex one which, in truth, can only ever be answered by the student themselves. As academics we must understand the difference between checking for plagiarism and checking for generative AI use. When assessing a student's possible use of generative AI there is no ground truth to test against, which makes the detection of AI use a completely different problem from plagiarism detection. A range of AI detection tools is available, some of which have been adopted within the sector. Some of these tools have high detection rates; however, most suffer from false positive rates that would mean institutions falsely accusing hundreds of students per year of committing academic offences. This paper explores a different approach to this problem which complements the use of AI detection tools. Rather than examining the work submitted by a student, the author examines the creation and editing of that work over time. This gives an understanding of how a piece of work was written and, most importantly, how it has been edited.
Inspecting a document's history requires that it is written on a cloud-based platform with version history enabled. The author has created a tool which sits on top of the cloud-based platform and integrates with the virtual learning environment. The tool records each time a student digitally touches their work, and the changes are recorded. The tool interface gives an overview for a cohort, with the ability to delve more deeply into an individual submission. The result is an easily accessible interactive history of a document during its development, giving some measure of provenance to that document. This history of construction and editing shows how a piece of written work has been crafted over time, providing useful evidence of academic practice. Data on the points where students digitally touch their work can also be useful beyond questions of academic practice. The author gives an example of using a data-driven approach to give formative feedback and discusses how data-driven approaches could become common in teaching practice.

Item Metadata only
Authentic assessment supporting curriculum and delivery mode transformation (2023-07-12) Allman, Zoe; Coupland, Simon; Fahy, Conor
De Montfort University is embracing significant transformation as curriculum and delivery mode transition into an intensive block model approach. The Computer Games Programming (CGP) team were particularly innovative in their approach (Jones, 2022), completely revisiting curriculum sequencing and assessment methods to facilitate the best learning journey for students and responding to employer and sector skills needs. This presentation highlights two examples of authentic assessment emerging from university-wide transformation. The digital economy requires graduates equipped with a set of practice-based digital skills. CGP had been moving away from traditional written exams towards large, in-depth coursework which students produce over a term or academic year.
In this approach digital skills are implicitly assessed, for example using source control metadata to assess students' capabilities with a specific tool chain. This requires examination of digital footprints over the term. The model therefore needed revisiting to facilitate block delivery and to explicitly assess these skills through face-to-face practical assessments we call driving tests, replicating assessments that are commonplace in other disciplines (Snodgrass et al., 2014; Kent-Waters et al., 2018). The depth of student knowledge is examined with a professional conversation, replicating assessment methods used in teacher and lecturer training (Britt et al., 2001). A driving test involves a student sitting with a tutor whilst being asked to perform a number of sequential, pre-scripted tasks. Students are marked on the breadth of tasks they complete and the manner in which they complete them. The student is given immediate, personalised verbal feedback and an overall mark. The student leaves a digital trail which is used for moderation. Professional conversations introduce further diversity in assessment at level 6. These conversations supplement a practical assessment component and assess descriptors which can be difficult to evaluate in more traditional formats, for example identifying emerging issues at the forefront of the subject and systematically identifying personal learning needs. Preparatory, co-created conversations highlighted that current level 6 learners would value this 'technical interview' format, as the conversations allow learners to naturally demonstrate their understanding of the subject without additional coursework documentation or production. Students value these approaches, which facilitate authentic demonstration of practical skills with tutor support and instant verbal feedback.
As these assessment methods embed, there is ongoing consideration of whether they should be time-limited activities; our experience suggests they should not, as to date students have required different amounts of time to complete the tasks whilst demonstrating competency.

Item Metadata only
The collapsing method of defuzzification for discretised interval type-2 fuzzy sets. (Elsevier, 2009-06) Greenfield, Sarah; Chiclana, Francisco; Coupland, Simon; John, Robert, 1955-

Item Metadata only
The collapsing method of defuzzification for discretised interval type-2 fuzzy sets. (2007) Greenfield, Sarah; Chiclana, Francisco; John, Robert, 1955-; Coupland, Simon

Item Open Access
A comparative study of fuzzy logic controllers for autonomous robots. (2006-07-01) Coupland, Simon; Gongora, Mario Augusto; John, Robert, 1955-; Wills, K.

Item Metadata only
Design pattern recognition by using adaptive neuro fuzzy inference system (IEEE, 2013) Alhusain, S.; Coupland, Simon; John, Robert, 1955-; Kavanagh, M.

Item Metadata only
Designing Generalised Type-2 Fuzzy Logic Systems using Interval Type-2 Fuzzy Logic Systems and Simulated Annealing (IEEE, 2012) Almaraashi, Majid; John, Robert, 1955-; Coupland, Simon

Item Metadata only
Developing and delivering in block: Reflections one year in (Quality Assurance Agency, 2023-09-21) Allman, Zoe; Coupland, Simon; Attwood, Luke; Fahy, Conor; Hasshu, Salim; Khuman, A. S.; Shell, Jethro

Item Metadata only
Embedded helicopter heading control using an adaptive network-based fuzzy inference system (2007) Passow, Benjamin N.; Coupland, Simon; Gongora, Mario Augusto

Item Metadata only
Enhanced interval approach for encoding words into interval type-2 fuzzy sets and convergence of the word FOUs. (IEEE, 2010) Coupland, Simon; Mendel, Jerry M., 1938-; Wu, D.

Item Metadata only
Enhanced Interval Approach for Encoding Words into Interval Type-2 Fuzzy Sets and Its Convergence Analysis (2011) Wu, D.; Mendel, Jerry M., 1938-; Coupland, Simon
Construction of interval type-2 fuzzy set models is the first step in the perceptual computer, which is an implementation of computing with words. The interval approach (IA) has, so far, been the only systematic method to construct such models from data intervals collected from a survey. However, as pointed out in this paper, it has some limitations, and its performance can be further improved. This paper proposes an enhanced interval approach (EIA) and demonstrates its performance on data collected from a web survey. The data part of the EIA has stricter and more reasonable tests than the IA, and the fuzzy set part of the EIA has an improved procedure to compute the lower membership function. We also perform a convergence analysis to answer two important questions: 1) does the output interval type-2 fuzzy set from the EIA converge to a stable model as increasingly more data intervals are collected, and 2) if it converges, how many data intervals are needed before the resulting interval type-2 fuzzy set is sufficiently similar to the model obtained from infinitely many data intervals?
We show that the EIA converges in a mean-square sense, and that, in general, 30 data intervals seem to be a good compromise between cost and accuracy.

Item Metadata only
Enhancing real-time sports commentary generation with dramatic narrative devices. (Springer, 2010) Rhodes, Martin; Coupland, Simon; Cruickshank, T.

Item Metadata only
An evolutionary approach to simulated football free kick optimisation. (2009) Rhodes, Martin; Coupland, Simon

Item Metadata only
An evolutionary approach to the optimisation of sport simulations: direct free kicks in football (2007) Rhodes, Martin; Coupland, Simon

Item Metadata only
Extensions to type-1 fuzzy: type-2 fuzzy logic and uncertainty (IEEE Computational Intelligence Society, 2006) John, Robert, 1955-; Coupland, Simon

Item Metadata only
A fast and efficient semantic short text similarity metric (IEEE, 2013) Croft, D.; Coupland, Simon; Shell, Jethro; Brown, S.

Item Metadata only
A fast and efficient semantic short text similarity metric (IEEE, 2012-09-09) Shell, Jethro; Coupland, Simon; Croft, David; Brown, Stephen
The semantic comparison of short sections of text is an emerging aspect of Natural Language Processing (NLP). In this paper we present a novel Short Text Semantic Similarity (STSS) method, Lightweight Semantic Similarity (LSS), to address the issues that arise with sparse text representation. The proposed approach captures the semantic information contained in the texts being compared in order to compute their similarity. The methodology combines semantic term similarities with a vector similarity method used within statistical analysis. A modification of the term vectors using synset similarity values addresses issues encountered with sparse text. LSS is shown to be comparable to current semantic similarity approaches, LSA and STASIS, whilst having a lower computational footprint.
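The core idea in the Lightweight Semantic Similarity abstract above, combining term-level semantic similarity with a vector similarity measure, can be illustrated with a toy sketch. This is not the published LSS algorithm: the `TERM_SIM` table stands in for real synset similarity values, and a generic soft-cosine measure stands in for the paper's method.

```python
import math

# Hand-made term similarity table: a stand-in for WordNet synset similarity.
# Unlisted pairs are treated as similarity 0; identical terms as 1.
TERM_SIM = {("car", "automobile"): 0.9, ("fast", "quick"): 0.8}

def term_sim(a, b):
    if a == b:
        return 1.0
    return TERM_SIM.get((a, b)) or TERM_SIM.get((b, a)) or 0.0

def soft_cosine(text_a, text_b):
    """Soft cosine over bag-of-words vectors, weighting cross terms by term_sim."""
    ta, tb = text_a.lower().split(), text_b.lower().split()
    vocab = sorted(set(ta) | set(tb))
    va = [ta.count(w) for w in vocab]
    vb = [tb.count(w) for w in vocab]
    def dot(x, y):
        return sum(x[i] * y[j] * term_sim(vocab[i], vocab[j])
                   for i in range(len(vocab)) for j in range(len(vocab)))
    denom = math.sqrt(dot(va, va)) * math.sqrt(dot(vb, vb))
    return dot(va, vb) / denom if denom else 0.0

print(soft_cosine("the car is fast", "the automobile is quick"))  # high: related terms count
print(soft_cosine("the car is fast", "bananas are yellow"))       # 0.0: nothing shared or related
```

A plain cosine would score the first pair on the shared words "the" and "is" only; the term-similarity weighting is what lets "car"/"automobile" and "fast"/"quick" contribute, which is the intuition behind modifying term vectors with synset similarities for sparse short texts.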
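The convergence question raised in the Enhanced Interval Approach abstract above can also be illustrated with a toy sketch. This is not the EIA itself: the "model" here is just the mean of the collected interval endpoints, and the Jaccard measure on intervals stands in for the paper's similarity analysis; it only shows the general phenomenon of an interval model stabilising as more survey intervals arrive.

```python
import random

def interval_jaccard(a, b):
    """Jaccard similarity of two closed intervals given as (left, right)."""
    overlap = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return overlap / union if union > 0 else 1.0

def interval_model(intervals):
    """Toy word model: the componentwise mean of the collected endpoints."""
    ls = [l for l, _ in intervals]
    rs = [r for _, r in intervals]
    return (sum(ls) / len(ls), sum(rs) / len(rs))

random.seed(42)
# Simulated survey: respondents map a word such as "moderate" onto [0, 10].
data = [(random.uniform(3, 5), random.uniform(6, 8)) for _ in range(300)]

reference = interval_model(data)  # stand-in for the "infinitely many intervals" model
for n in (10, 30, 100, 300):
    sim = interval_jaccard(interval_model(data[:n]), reference)
    print(f"n={n:4d}  similarity to full model = {sim:.3f}")
```

Running this shows the first-n model's similarity to the full-data model approaching 1 as n grows, which is the shape of question the paper answers rigorously (in a mean-square sense, with around 30 intervals as a practical cut-off).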