With the fast growth of information available through the World Wide Web, search engine ranking has become limited in dealing with such an enormous amount of information. Web search engines should be enriched with methodologies that enable them to understand the content of Web pages and to align each page with the query category that best matches its content. In this paper, a proposed system is...
Different kinds of advertisement on the internet help form certain attitudes, beliefs, and perceptions in internet users. When users continuously see online advertisements, their purchase decisions may be influenced to some extent. Different ads occupy different amounts of space on the web page viewed by potential customers. They are also displayed on different parts of the web...
With the development of Web technology and the growing variety of information, providing high-quality, relevant search results has become a huge challenge for current Web search engines. We analyze the shortcomings of the PageRank and Weighted PageRank algorithms and make targeted improvements. By judging the relations between web pages based on their content, the improved PageRank algorithm...
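The abstract is truncated, but it suggests weighting link transitions by how related two pages' content is. A minimal sketch of such a content-weighted PageRank, with placeholder similarity scores (the actual weighting scheme is not given in the abstract):

```python
# Sketch of a content-weighted PageRank: rank flows along a link in
# proportion to an assumed content-similarity score, instead of uniformly
# as in the classic algorithm. Graph and scores below are toy data.

def pagerank(links, sim, d=0.85, iters=50):
    """links: {page: [outlinked pages]}; sim: {(src, dst): similarity}."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for src, outs in links.items():
            total = sum(sim.get((src, o), 1.0) for o in outs) or 1.0
            for o in outs:
                # Distribute this page's rank according to content similarity.
                new[o] += d * rank[src] * sim.get((src, o), 1.0) / total
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
sim = {("a", "b"): 0.9, ("a", "c"): 0.1, ("b", "c"): 1.0, ("c", "a"): 1.0}
print(pagerank(links, sim))
```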
With the diversification of media in social networks, deep learning on multimedia data is becoming more and more important. This paper analyzes the special tag information of image, audio, and video data on the internet and proposes an effective recognition method for multimedia data in social networks. The method will provide a large amount of data for the multimedia content analysis of social...
Clickjacking is an attack that tricks victims into clicking on invisible elements of a web page to perform unintended actions that might be advantageous for the attacker. To defend against clickjacking, many techniques have been proposed, but it is still questionable whether they are effectively deployed in practice. We investigated how vulnerable Korean websites are to clickjacking attacks by performing...
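One common way to measure whether a site deploys framing defenses (not necessarily this paper's exact methodology, which the truncated abstract does not specify) is to check for the X-Frame-Options header and the CSP frame-ancestors directive. A minimal sketch:

```python
# Sketch: probe a site's anti-clickjacking headers. A page lacking both
# X-Frame-Options and a CSP frame-ancestors directive can typically be
# embedded in an attacker's iframe.
import requests

def framing_defenses(url):
    headers = requests.get(url, timeout=10).headers
    csp = headers.get("Content-Security-Policy", "")
    return {
        "x_frame_options": headers.get("X-Frame-Options"),
        "csp_frame_ancestors": "frame-ancestors" in csp.lower(),
    }

print(framing_defenses("https://example.com"))
```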
There are hundreds of millions of searches performed through search engines daily. The increasing importance of search engines marks the end of the traditional outbound marketing strategy, now replaced by companies' reorientation toward inbound marketing strategies, as search engines are presently a part of human existence. In the present electronic age, the consumer has real-time information available on all products,...
The paper introduces the remote experiment "Emission of luminescent diodes" (http://remotelab8.truni.sk). We present an example of the utilization of the remote experiment in quantum theory, the theory of the observed phenomenon, and the evaluation of the measured data. The educational activities were designed in accordance with the Integrated e-Learning strategy of education...
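A standard evaluation for an LED-emission experiment of this kind (assumed here, since the truncated abstract does not state the evaluation) estimates Planck's constant from the diode's threshold voltage $U_t$ and emission wavelength $\lambda$:

```latex
\[
  eU_t \approx \frac{hc}{\lambda}
  \quad\Longrightarrow\quad
  h \approx \frac{eU_t\,\lambda}{c},
\]
% e.g. a red LED with $\lambda = 630\,\mathrm{nm}$ and $U_t \approx 1.9\,\mathrm{V}$
% gives $h \approx 6.4\times10^{-34}\,\mathrm{J\,s}$.
```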
The home of the future should be a smart one that supports us in our daily life. Up to now, only a few security incidents in that area are known. According to various security analyses, this fact is more a result of the low market spread of Smart Home products than of the security of such systems. Given that Smart Homes are becoming more and more popular, we consider current incidents and analyses to...
In the past, the focus was on collecting the required data; nowadays, since a great deal of data is stored and disclosed, the required information is available, and searching it efficiently has become a new task. Search engines such as Google, Bing, and Baidu help us find information on the internet. However, an enormous number of search results is listed. In some cases, the...
The WWW is a huge collection of information, and people are engaged with it most of the time in order to retrieve different types of information. Users want accurate and appropriate data at the top of the search results, presented in a user-friendly manner. They also want a personal space on the internet while browsing the web; from this arises the need for personalization of the search history...
HTTP/2 is the next generation of the HTTP protocol and, as such, is supposed to solve past issues and bring improved performance. Through thorough experiments, we try to provide reliable figures on this new version of the protocol. To do this, we adopt a methodological approach that takes the newly introduced features into consideration: compression, multiplexing, server push, and priority. Previous...
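The paper's own measurement tooling is not described in the truncated abstract; as an illustration, checking which protocol version a server negotiates can be done from Python with the httpx client (installed via `pip install 'httpx[http2]'`):

```python
# Sketch: fetch a page with HTTP/2 enabled and report the negotiated
# protocol version. This only probes support; it is not the paper's
# benchmarking methodology.
import httpx

with httpx.Client(http2=True) as client:
    r = client.get("https://www.example.com/")
    # "HTTP/2" if the server negotiated it via ALPN, otherwise "HTTP/1.1".
    print(r.http_version, r.status_code)
```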
Temporal information, such as the publication time and the content time, is an essential attribute of a web page. However, major search engines pay little attention to the temporal information of web pages and ignore the relationship between keywords and time phrases. In this paper, we focus on the need to recognize and extract time phrases from web pages. We built...
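A minimal sketch of time-phrase recognition with regular expressions; the patterns below are illustrative assumptions, not the paper's actual rules:

```python
# Sketch: find date-like time phrases in page text and record their offsets,
# covering three common formats. Real systems use richer grammars.
import re

TIME_PHRASE = re.compile(
    r"\b(\d{4}-\d{2}-\d{2}"              # 2023-05-17
    r"|\d{1,2}/\d{1,2}/\d{4}"            # 5/17/2023
    r"|(?:January|February|March|April|May|June|July|August|September"
    r"|October|November|December)\s+\d{1,2},\s+\d{4})\b"  # May 17, 2023
)

text = "Published on May 17, 2023, updated 2023-06-01."
for match in TIME_PHRASE.finditer(text):
    print(match.group(), "at offset", match.start())
```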
This paper proposes a method for constructing a DOM-based Web information extraction wrapper. Combining XPath and pattern matching, the wrapper can deal with the two types of information at the same time under the guidance of source and target knowledge libraries. The knowledge libraries also help to extract more useful information for users. The paper introduces in detail the process of building the wrapper...
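In the spirit of that combination, a minimal sketch of a wrapper that uses XPath for structured fields and a regular expression for free text; the XPath rules, pattern, and sample page are assumptions for illustration:

```python
# Sketch: DOM-based extraction combining an XPath rule (structured field)
# with a pattern-matching rule (free text), using lxml.
import re
from lxml import html

PAGE = """<html><body>
  <h1 class="title">Sample product</h1>
  <div class="info">Price: $19.99, in stock</div>
</body></html>"""

tree = html.fromstring(PAGE)
title = tree.xpath('//h1[@class="title"]/text()')[0]      # XPath rule
price = re.search(r"\$\d+\.\d{2}", tree.text_content())   # pattern rule
print(title, price.group())
```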
This paper presents mechanisms for identification of web traffic masqueraded behind encrypted Virtual Private Network (VPN) tunnels. Website identification using Traffic Analysis (TA) has many administrative applications including preventing access to forbidden websites and site-specific Quality of Service (QoS) provisioning. Previous works in this area mainly looked at the problem of identifying...
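Traffic-analysis identification of this kind is typically framed as supervised classification over coarse flow features; the features and data below are toy assumptions, as the truncated abstract does not give the paper's feature set:

```python
# Sketch: classify encrypted VPN traces by website using flow-level
# features (byte counts, packet count, timing) and a random forest.
from sklearn.ensemble import RandomForestClassifier

# Each row: [bytes up, bytes down, packet count, mean inter-arrival (s)]
X_train = [
    [5_000, 120_000, 150, 0.020],   # trace of site A
    [4_800, 118_000, 146, 0.021],   # trace of site A
    [20_000, 30_000,  90, 0.050],   # trace of site B
    [21_500, 29_000,  95, 0.048],   # trace of site B
]
y_train = ["siteA", "siteA", "siteB", "siteB"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[5_100, 119_000, 148, 0.019]]))  # -> ['siteA']
```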
Nowadays, the Internet offers data to anyone at any time. Websites on the Internet have been warehousing data for many years, i.e., for 10 years and more. In the meantime, many websites have become obsolete. This means they no longer have an owner, either because there is no one to maintain them or because they have become unavailable for indexing by the spiders that retrieve information about documents...
Search engines are vital in the current digital world. Given the huge amount of information on the internet, they are the tools internet users rely on to search web pages for the required information. However, most of the search engines currently on the market are inadequate and thus do not completely serve the needs of internet users, because in most cases they give results...
In the current diversity and complexity of the network information environment, web page sensitive-keyword detection is an important and immediate way to manage public opinion online. We propose a system for detecting sensitive keywords in web pages. The system can detect sensitive keywords in web pages timely and effectively, and it marks the position of the keywords in the web...
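A minimal sketch of the detection-and-marking step; the keyword list and the position encoding (character offsets) are assumptions, since the abstract is truncated:

```python
# Sketch: scan page text for sensitive keywords and record the position
# (start/end character offsets) of every occurrence, case-insensitively.
import re

SENSITIVE = ["keyword1", "keyword2"]

def detect(page_text, keywords=SENSITIVE):
    hits = []
    for kw in keywords:
        for m in re.finditer(re.escape(kw), page_text, re.IGNORECASE):
            hits.append({"keyword": kw, "start": m.start(), "end": m.end()})
    return hits

print(detect("... keyword1 appears here, then KEYWORD2 ..."))
```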
Search Engine Optimization (SEO) is the process of improving a website's position in Internet search engine results. Firstly, this paper analyzes a range of different approaches discussed in the literature and organizes them into an engineering meta-model. This meta-model explains the key differences and illustrates the relationships between the approaches. Secondly, by clarifying the links between...
The technology of discovering Internet information sources on a specific topic is the groundwork of information acquisition in the current big data era. This paper presents a multi-seed cocitation algorithm to find new Internet information sources. The proposed algorithm is based on cocitation, but its difference from traditional algorithms is that we use multiple websites on a specific topic as input...
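A minimal sketch of multi-seed cocitation: a candidate site scores higher the more referrer pages cite it together with several of the seed sites. The scoring rule below is an assumption; the abstract only states that multiple topic-specific websites are used as input:

```python
# Sketch: score candidate sites by how strongly referrer pages co-cite
# them with a set of topic seeds. Toy data for illustration.
def cocitation_scores(referrers, seeds):
    """referrers: {referrer page: set of cited sites}; seeds: seed sites."""
    scores = {}
    for cited in referrers.values():
        overlap = len(cited & seeds)
        if overlap == 0:
            continue
        for site in cited - seeds:
            # Weight a hit by how many seeds the referrer co-cites.
            scores[site] = scores.get(site, 0) + overlap
    return sorted(scores.items(), key=lambda kv: -kv[1])

referrers = {
    "hub1": {"seedA", "seedB", "newsite1"},
    "hub2": {"seedA", "newsite1", "newsite2"},
    "hub3": {"seedC", "newsite2"},
}
print(cocitation_scores(referrers, {"seedA", "seedB", "seedC"}))
# -> [('newsite1', 3), ('newsite2', 2)]
```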
The great popularity of the Internet increases concern for the safety of its users, as many malicious Web pages pop up on a daily basis. Client honeypots are tools that can detect malicious Web pages that aim to infect their visitors. These tools are widely used by researchers and anti-virus companies in their attempts to protect Internet users from being infected. Unfortunately, cyber-criminals...