possible to go through the entire curriculum or to imagine how the individual courses, learning units, outcomes, and branches of medicine are interrelated. This paper introduces an innovative analytical approach that helps to automatically identify the most frequent topics based on keyword extraction. Moreover, the
information and communication technologies. We introduce an algorithm for automatic processing of curriculum metadata (automatic keyword extraction based on unsupervised approaches), and we demonstrate a real application in a process of innovation and optimization of medical education. The outputs of our pilot analysis
Consider an information repository whose content is categorized. A data item (in the repository) can belong to multiple categories and new data is continuously added to the system. In this paper, we describe a system, CS*, which takes a keyword query and returns the relevant top-K categories. In contrast, traditional
In search engines, keyword extraction is an important technique. In this paper, addressing the defects of traditional keyword extraction algorithms, we propose an improved weight-computation strategy. The experimental results show that the improved method performs significantly better than the
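The abstract does not specify the improved weighting strategy, but the classical baseline such work builds on is TF-IDF. The sketch below is a minimal, hypothetical illustration of keyword weighting by TF-IDF over pre-tokenized documents, not the paper's actual method.

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_k=5):
    """Rank candidate keywords in one document by TF-IDF weight.

    docs: list of documents, each a list of tokens.
    """
    # Document frequency: in how many documents each term appears
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    n_docs = len(docs)
    tf = Counter(docs[doc_index])
    n_terms = len(docs[doc_index])
    weights = {
        term: (count / n_terms) * math.log(n_docs / df[term])
        for term, count in tf.items()
    }
    return sorted(weights, key=weights.get, reverse=True)[:top_k]
```

Terms frequent in one document but rare across the collection score highest, which is the intuition most "improved weight computation" strategies refine further (e.g. with position or part-of-speech factors).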
How to find teaching resources on the Internet quickly and accurately according to users' demands is an urgent problem. This paper proposes a preprocessing design for keyword-based search over a network teaching-resource database based on ontology. Firstly, the teaching ontology is created according to the
The paper describes a method for generating sequences of tests at different levels of difficulty based on keywords given by the user. A battery of tests is represented by a normal tree, and every test is encoded as a node within the tree. Every node contains as information a list of
This paper proposes a framework that automatically analyzes the parameters of Chinese test items. The framework utilizes latent semantic analysis (LSA) to analyze the relationships of keywords among all test items in an item bank. It also uses a similarity measure to calculate the degree of similarity between keywords. We
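LSA compares items in a low-rank latent space obtained from a term-document matrix via SVD. The sketch below is a minimal illustration of that general technique under assumed inputs (a keyword-by-item count matrix), not the paper's specific item-analysis pipeline.

```python
import numpy as np

def lsa_similarity(term_item, k=2):
    """Cosine similarity between test items in a k-dimensional LSA space.

    term_item: (n_keywords x n_items) matrix of keyword counts.
    Returns an (n_items x n_items) similarity matrix.
    """
    # Truncated SVD: keep the k strongest latent dimensions
    U, s, Vt = np.linalg.svd(term_item, full_matrices=False)
    items = (np.diag(s[:k]) @ Vt[:k]).T  # item vectors in latent space
    norms = np.linalg.norm(items, axis=1, keepdims=True)
    unit = items / np.clip(norms, 1e-12, None)
    return unit @ unit.T
```

Items that share keyword usage patterns end up close in the latent space even when they share no keyword exactly, which is what distinguishes LSA from plain keyword matching.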
enhanced. Three PDA-integrated outdoor observation activities and worksheets were arranged for elementary-school ecology learning. A scoring rubric and two scoring methods, i.e. keyword pattern matching (PM) and latent semantic analysis (LSA), were developed for rating the constructed responses on the worksheets. The scoring
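The PM scoring method awards rubric points when expected keywords appear in a student's free-text response. A minimal sketch of that idea, with a hypothetical rubric format (a list of keyword-set/points pairs, not the study's actual rubric):

```python
def pm_score(response, rubric):
    """Score a free-text answer by keyword pattern matching.

    rubric: list of (keyword_set, points) pairs; the points are awarded
    when any keyword in the set occurs in the response.
    """
    text = response.lower()
    return sum(points for keywords, points in rubric
               if any(kw in text for kw in keywords))
```

This is fast and transparent but brittle against paraphrase, which is exactly the gap an LSA-based scorer is meant to cover.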
optimized sequences of tests described by certain keywords that follow restrictions given by the user. These restrictions refer to matching the keywords that characterize each test against the keywords given by the user. The user also has the option to avoid, as far as possible, certain tests that are characterized by some
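One simple way to realize "match required keywords, avoid others" is to score each test by keyword overlap minus an avoidance penalty. The sketch below is an assumed formulation for illustration (the test records and scoring rule are hypothetical, not taken from the paper):

```python
def select_tests(tests, required, avoided):
    """Rank tests by overlap with required keywords, penalizing avoided ones.

    tests: list of dicts with a "keywords" list; highest-scoring tests first.
    """
    def score(test):
        kws = set(test["keywords"])
        return len(kws & set(required)) - len(kws & set(avoided))
    return sorted(tests, key=score, reverse=True)
```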
randomness of the evaluation of examination papers, this paper demonstrates a representation model that assists educators in analyzing the percentage of accuracy between the examination paper specification (EPS) and the JSU. The parameters used are cognitive level and score. A keyword extraction and classification approach based on
costs. Besides conventional LMS functions, NOBASU incorporates several original ones, such as the keyword-submission function for preparing and reviewing class contents. In this paper, we present two new characteristic functions for Java programming education in NOBASU. The Java code test function verifies the
Writing and browsing education blogs has become one of the important methods of e-learning. Learners can search for interesting resources in these education blogs. However, traditional blog search only provides keyword-based matching and lacks automatic extraction of learner interests and further interest-related
detail. The paper presents similarity algorithms for domain keywords and common words, respectively, and integrates them into the question similarity. Experimental results show that the proposed method achieves good performance, and the system has been put into application.
With the gradual improvement of educational informatization, the idea of educational big data has become widely accepted. More and more teaching staff and researchers are aware of their data needs and expect to perform data mining and learning analytics on educational big data. However, the most practical and basic problem they face in the process of carrying out research is...
A major drawback in today's information landscape is the distribution of resources across numerous repositories. Consequently, users interested in working with these resources must either jump from repository to repository to find and possibly access them, or rely on web search engines. In this paper, we discuss the problems arising from these approaches and introduce the metadata-based MACE approach...
The Web forum is a key tool for new knowledge building among students in learning management systems. Unfortunately, the huge number of messages makes it difficult for tutors and teachers to quickly evaluate the progress of their students, so automated support for the analysis is needed. Our solution relies on simple statistical indices inspired by work in the text-analysis field. The obtained...
Precise extraction of valuable information from short text messages posted on social media (Twitter) is a challenging task. In this paper, we analyze tweets to classify data and sentiments from Twitter more precisely. The information from tweets is extracted using keyword-based knowledge extraction. Moreover, the
results based on keyword string matching. The precision and recall of this method are low. This research proposes subject-name search based on document content using a weighted ontology. The ontology is built from extracted terms. Each term is given a weight based on the number of its relations. The user query is expanded based on its
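The described weighting (a term's weight equals its number of ontology relations) and the query expansion can be sketched as follows. The ontology representation here (a dict mapping each term to its related terms) is an assumption for illustration, not the paper's data model:

```python
def expand_query(query_terms, ontology):
    """Expand a query with ontology neighbours.

    ontology: dict mapping a term to the set of terms it is related to.
    Each term's weight is its number of relations; original query terms
    are boosted so they always outrank added neighbours of equal degree.
    """
    degree = {term: len(related) for term, related in ontology.items()}
    expanded = {}
    for term in query_terms:
        expanded[term] = degree.get(term, 0) + 1  # boost original terms
        for neighbour in ontology.get(term, ()):
            expanded.setdefault(neighbour, degree.get(neighbour, 0))
    return expanded
```

A retrieval layer would then match documents against the weighted expanded set rather than the literal query string, which is what lifts precision and recall over plain string matching.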
search efficiency. A prototype system has been implemented based on this approach. Given a keyword-combination query, the system outputs a ranked list of relevant results according to semantic similarity. In the experiment, the system achieved better results than traditional keyword-based search.
information as network actors, e.g. institutes, departments, or keywords, to reflect digital humanities research structures at the macro, meso, and micro levels, respectively. Digital humanities research projects are retrieved from the GRB (Government Research Bulletin) database, which archives research projects sponsored by the Taiwan government. A