Vol. 13, No. 3&4, July 1, 2014
Component-Based, Client-Oriented Web Engineering:
Issues, Advancements and Opportunities
Editorial
(pp181-182)
Florian Daniel, Peter Dolog, and Qing Li
Component-based Web Engineering using Shared Components and Connectors
(pp183-202)
Stefania Leone, Alexandre de Spindler, Moira C.
Norrie, and Dennis McLeod
Today, web development platforms often follow a modular architecture
that enables platform extension. Popular web development frameworks such
as Ruby on Rails and Symfony, as well as content management systems
(CMS) such as WordPress and Drupal offer extension mechanisms that allow
the platform core to be extended with additional functionality. However,
such extensions are typically isolated units defining their own data
structures, application logic and user interfaces, and are difficult to
combine. We address the fact that applications need to be configured
more freely through the composition of such extensions. We present an
approach and model for component-based web engineering based on the
concept of components and connectors between them, supporting
composition at the level of the schema and data, the application logic
and the user interface. We have realised our approach in two popular web
development settings. First, we demonstrate how our approach can be
integrated into web development frameworks, thus bringing
component-based web engineering to the developer. Second, we present,
based on the example of WordPress, how advanced end-users can be
supported in component-based web engineering by integrating our approach
into CMS. The applicability of our approach in both settings
demonstrates its generality.
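To make the composition idea concrete, here is a minimal, hypothetical sketch (in Python, purely illustrative; the paper's actual model and API are not shown in this abstract) of components that package their own schema, data and UI, with a connector composing two of them at the data level:

```python
# Hypothetical sketch of a component/connector composition model; all
# names and the connector semantics are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Component:
    """An extension packaging its own schema, data and (default) UI."""
    name: str
    schema: dict                      # field name -> Python type
    records: list = field(default_factory=list)

    def render(self, record):
        # Minimal UI facet: one "field=value" pair per schema field.
        return ", ".join(f"{k}={record[k]}" for k in self.schema)

@dataclass
class Connector:
    """Composes two components by mapping fields of one onto the other."""
    source: Component
    target: Component
    mapping: dict                     # source field -> target field

    def propagate(self):
        # Data-level composition: project mapped fields across components.
        for rec in self.source.records:
            self.target.records.append(
                {t: rec[s] for s, t in self.mapping.items()})

blog = Component("blog", {"title": str, "body": str})
feed = Component("feed", {"headline": str})
blog.records.append({"title": "Hello", "body": "First post"})
Connector(blog, feed, {"title": "headline"}).propagate()
print(feed.render(feed.records[0]))   # -> headline=Hello
```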
DireWolf Framework for Widget-based Distributed
User Interfaces
(pp203-222)
Dejan Kovachev, Dominik Renzel, Petru
Nicolaescu, Istvan Koren, and Ralf Klamma
Web applications have overtaken traditional desktop applications,
especially in collaborative settings. However, the bulk of Web
applications still follow the "single user on a single device"
computing model. Therefore, we created the DireWolf framework for rich
Web applications with distributed user interfaces (DUIs) over a
federation of heterogeneous commodity devices supporting modern Web
browsers, such as laptops, smartphones and tablet computers. The DUIs
are based on widget technology coupled with cross-platform inter-widget
communication (IWC) and seamless session mobility. Inter-widget
communication technologies connect the widgets and enable real-time
collaborative applications as well as runtime migration in our
framework. We show that the DireWolf framework facilitates the use case
of DUI-enabled semantic video annotation. For a single user it provides
more flexible control over different parts of an application by enabling
the simultaneous use of smartphones, tablets and computers. We
conducted a technical evaluation and two user studies to validate the
DireWolf approach. The work presented opens the way for creating
distributed Web applications that can access device-specific
functionalities such as multi-touch and text input in a federated and
usable manner. In this paper, we also sketch our ongoing work to
integrate the WebRTC API into DireWolf, where we see opportunities for
potential adoption of DUI Web applications by the majority of Web users.
Efficient Development of Progressively Enhanced
Web Applications by Sharing Presentation and Business Logic
Between Server and Client (pp223-242)
Markus Ast, Stefan Wild, and Martin Gaedke
A Web application's codebase is typically divided into a server side
and a client side, with essential functionalities such as validation or
rendering implemented twice. While developers can choose from
a rich set of programming languages to implement a Web application's
server side, they are bound to JavaScript for the client side. Recent
developments like Node.js make it simple and efficient to use JavaScript
on the server side as well, but they do not provide a common codebase
for the entire Web application. In this article, we present the SWAC
approach, which aims at reducing development effort and minimizing
coding errors in order to make creating Web applications more efficient.
Based on our approach, we created the SWAC framework. It
enables establishing a unified Web application codebase that provides
both dynamic functionality and progressive enhancement by taking
characteristic differences between server and client into account.
Model-Based Rich Internet Applications Crawling:
"Menu" and "Probability" Models
(pp243-262)
Suryakant Choudhary, Emre Dincturk, Seyed
Mirtaheri, Gregor v. Bochmann, Guy-Vincent Jourdan, and Iosif Viorel
Onut
Strategies for "crawling" Web sites efficiently were described more
than a decade ago. Since then, Web applications have come
a long way both in terms of adoption to provide information and services
and in terms of technologies to develop them. With the emergence of
richer and more advanced technologies such as AJAX, "Rich Internet
Applications" (RIAs) have become more interactive, more responsive and
generally more user friendly. Unfortunately, we have also lost our
ability to crawl them. Building models of applications automatically is
important not only for indexing content, but also for automated testing,
automated security assessment, automated accessibility assessment and,
in general, for applying software engineering tools. We must
regain our ability to efficiently construct models for these RIAs. In
this paper, we present two methods based on "Model-Based Crawling"
(MBC), first introduced in [ICWE2011]: the "menu" model and the
"probability" model. These two methods are shown to be more effective
at extracting models than previously published methods, and are much
simpler to implement than previous models for MBC. A distributed
implementation of the probability model is also discussed. We compare
these methods and others against a set of experimental and "real" RIAs,
showing that in our experiments, these methods find the set of client
states faster than other approaches, and often finish the crawl faster
as well.
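As a rough intuition for the probability model only (a sketch under assumptions, not the paper's method), the toy crawler below always fires the event whose past executions most often revealed new client states:

```python
# Toy sketch of probability-based crawling (illustrative only). Each
# event keeps a running estimate of P(new state | firing the event), and
# the crawler always fires the highest-estimate event next. Real RIA
# crawlers must also replay event paths to reach a state; that, and the
# paper's exact statistics and guarantees, are omitted here.

# Hypothetical toy RIA: state -> {event name: resulting state}.
APP = {
    "s0": {"menu": "s1", "ad": "s0"},
    "s1": {"menu": "s2", "ad": "s1"},
    "s2": {"menu": "s0", "ad": "s2"},
}

def crawl(start="s0"):
    seen = {start}
    stats = {}                      # event name -> (discoveries, firings)
    frontier = [(start, e) for e in APP[start]]
    trace = []

    def score(pair):
        # Optimistic prior: 1 discovery in 2 firings for unseen events.
        d, n = stats.get(pair[1], (1, 2))
        return d / n

    while frontier:
        frontier.sort(key=score, reverse=True)
        state, event = frontier.pop(0)
        nxt = APP[state][event]
        d, n = stats.get(event, (1, 2))
        stats[event] = (d + (nxt not in seen), n + 1)
        trace.append((state, event, nxt))
        if nxt not in seen:
            seen.add(nxt)
            frontier += [(nxt, e) for e in APP[nxt]]
    return trace

for step in crawl():
    print(step)   # "menu" events, which reveal new states, are preferred
```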
Other Research Articles:
An Improved Ant Colony Algorithm for Effective Mining
of Frequent Items (pp263-276)
Suriya Sundaramoorthy and S.P. Shantharajah
Data mining involves discovering potentially useful content from large
collections of heterogeneous data sources. Two decades on, it remains an
active area of research, offering mining engineers a flexible platform
for analysing and visualizing the hidden relationships among data
sources. Association rules are well suited to representing those
relationships, and two parameters, support and confidence, govern how
such rules are framed. Frequent itemset mining is also termed frequent
pattern mining: when a combination of items recurs frequently, it is
called a pattern. The ultimate goal is to design rules over such
frequent patterns effectively, i.e., with low time and space complexity.
The number of evolutionary algorithms pursuing this goal keeps growing,
and bio-inspired algorithms hold a strong place in machine learning,
mining and evolutionary computing. The Ant Colony Algorithm is one such
algorithm, modelled on the foraging behaviour of ants. It is adopted for
its parallel search and dynamic memory allocation, and it works
comparatively faster than the basic Apriori, AIS and FP-Growth
algorithms. Its two major parameters are the pheromone updating rule and
the transition probability. We improve the basic ant colony algorithm by
modifying the pheromone updating rule so as to reduce repeated scans
over the data store and the number of candidate sets. The proposed
approach was tested using MATLAB together with the WEKA toolkit. The
experimental results show that the stigmergic communication of the
improved ant colony algorithm mines frequent items faster and more
effectively than the existing algorithms listed above.
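For reference, in the classical ant colony algorithm the two parameters named above take the following standard forms; the paper's contribution is a modified pheromone updating rule (the second equation), whose exact form the abstract does not give:

```latex
% Standard Ant System rules (classical forms, not the paper's variant):
% transition probability and pheromone update.
p_{ij} = \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}
              {\sum_{l \in N_i} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}},
\qquad
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \Delta\tau_{ij}
```

Here τ_ij is the pheromone on edge (i, j), η_ij the heuristic desirability, α and β weighting exponents, ρ the evaporation rate, and N_i the feasible neighbourhood of node i.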
Improving Search and Exploration in Tag Spaces
Using Automated Tag Clustering (pp277-301)
Joni Radelaar, Aart-Jan Boor, Damir Vandic,
Jan-Willem van Dam, and Flavius Frasincar
In recent years we have experienced an increase in the usage of tags
to describe resources. However, the free nature of tagging presents some
challenges regarding the search and exploration of tag spaces. In order
to deal with these challenges we propose the Semantic Tag Clustering
Search (STCS) framework. The framework first groups syntactic variations
using several measures based on the Levenshtein distance and on the
cosine similarity of tag co-occurrence vectors. We find that a measure
combining the newly introduced variable-cost Levenshtein similarity with
the cosine similarity significantly outperforms the other methods we
evaluated in terms of precision. After grouping syntactic
variations, the framework clusters semantically related tags using the
cosine similarity based on tag co-occurrences. We compare the STCS
framework to a state-of-the-art clustering technique and find that the
STCS framework performs significantly better in terms of precision. For
the evaluation we used a large data set gathered from Flickr, which
contains all the pictures uploaded in the year 2009.
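As an illustration of combining an edit-distance measure with co-occurrence-based cosine similarity to detect syntactic variants, here is a minimal sketch; a plain Levenshtein distance and an equal weighting are used as stand-ins, whereas the paper's variable-cost measure and weighting differ:

```python
# Sketch: score two tags as syntactic variants by mixing string
# similarity with co-occurrence similarity. Illustrative only.

import math

def levenshtein(a, b):
    # Classic dynamic-programming edit distance (unit costs).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def lev_sim(a, b):
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def cosine(u, v):
    # Cosine similarity of sparse co-occurrence vectors (dicts).
    dot = sum(u.get(t, 0) * v.get(t, 0) for t in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical co-occurrence data: tag -> {other tag: count}.
cooc = {
    "newyork": {"city": 10, "usa": 8},
    "new_york": {"city": 9, "usa": 7},
}

def variant_score(a, b, w=0.5):
    # Equal weighting is an assumption; the paper's combination differs.
    return w * lev_sim(a, b) + (1 - w) * cosine(cooc[a], cooc[b])

print(variant_score("newyork", "new_york"))   # high score -> variants
```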
A Conceptual Cohesion Metric for Service Oriented
Systems
(pp302-332)
Ali Kazemi, Ali Rostampour, Hassan Haghighi,
and Sahel Abbasi
Service conceptual cohesion has a considerable
impact on the reusability and maintainability of service-oriented
software systems. Conceptual cohesion indicates the degree of focus of
services on a single business functionality. Current metrics for
measuring service cohesion reflect the structural aspect of cohesion and
therefore cannot be utilized to measure conceptual cohesion of services.
Latent Semantic Indexing (LSI), on the other hand, is an information
retrieval technique widely used to measure the degree of similarity
between a set of text-based documents. In our previous work, we proposed
a metric, SCD (Service Cohesion Degree), that measures conceptual
cohesion of services based on the LSI technique. SCD provides
a quantitative evaluation to measure how much a service concentrates on
a single business functionality. In addition, SCD is applied in the
service identification step, i.e., when services are not yet available
and the designer plans to develop highly cohesive services. This paper
makes two contributions beyond our previous work. First, it resolves two
anomalies that arose in our previous method when calculating the
conceptual relationship between service operations. Second, as its main
contribution, it presents a theoretical validation and an empirical
evaluation of SCD. Using a small-scale controlled study, the empirical
evaluation demonstrates that SCD measures conceptual cohesion of
services acceptably.
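The sketch below illustrates only the LSI ingredient: service operations are represented in a truncated-SVD concept space and compared by cosine similarity. SCD's actual aggregation of such similarities into a cohesion score is defined in the paper and not reproduced here; the operation names and the averaging remark are assumptions.

```python
# Minimal LSI sketch: term-by-operation matrix, truncated SVD, and
# pairwise cosine similarity of operations in the latent concept space.

import numpy as np

ops = ["create order", "cancel order", "render invoice pdf"]
terms = sorted({w for op in ops for w in op.split()})

# Term-by-operation occurrence matrix.
A = np.array([[op.split().count(t) for op in ops] for t in terms],
             dtype=float)

U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                      # latent concepts kept
ops_latent = (np.diag(S[:k]) @ Vt[:k]).T   # one row per operation

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Pairwise conceptual similarity between the service's operations; a
# cohesion score could, e.g., average these (one possible aggregation).
for i in range(len(ops)):
    for j in range(i + 1, len(ops)):
        print(ops[i], "<->", ops[j],
              round(float(cos(ops_latent[i], ops_latent[j])), 2))
```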
Webpage Clustering – Taking the Zero Step: a Case
Study of an Iranian Website
(pp333-360)
Abbas Keramati and Ruholla Jafari-Marandi
The expansion of websites and the sheer number of their pages
have not only frustrated visitors but also made websites ever more
difficult for their owners to manage and control. In the past few years,
data mining (clustering) has helped website owners extract their
visitors' preferences and come to know their own websites properly. This
paper contributes to that line of literature in several ways. First,
given that SOM has been the popular algorithm for page clustering, we
compare SOM with K-means (another popular clustering algorithm) and show
the superiority of SOM for the task of webpage clustering.
Second, because the quality of a clustering, unlike that of a
classification, cannot be tested directly, this study proposes a
mind-set by which one, before taking any other action, goes through a
number of steps to choose the best set of data. Third, the literature
has never raised the question of how suitable each type of data
(content, structure and usage) is for the task it is used for.
Using an Iranian website's data, a field study and the SOM algorithm, we
show that the popular belief about which type of data suits which task
should be open to doubt. We also show that different sets of data can
influence the results tremendously in the two chosen tasks, webpage
profiling and extracting visitors' preferences.
Last but not least, beyond observing the influence of different
sets of data, both data mining tasks were carried through to completion,
and their results are presented in the paper.
Additionally, using the results of the second clustering task (the
extraction of visitors' preferences), a novel recommendation system is
presented. This recommendation system was installed on the website for
more than a month, and its influence on the whole website is observed
and analysed.
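For readers unfamiliar with SOM, the following minimal sketch clusters pages by feature vectors with a self-organizing map; the grid size, decay schedules and random feature data are illustrative choices, not the paper's settings:

```python
# Minimal self-organizing map (SOM) sketch for clustering pages by
# feature vectors (e.g., usage or content features). Illustrative only.

import numpy as np

rng = np.random.default_rng(0)
pages = rng.random((40, 5))         # 40 pages, 5 features each (dummy)

gw, gh, dim = 4, 4, pages.shape[1]  # 4x4 map of units
w = rng.random((gw * gh, dim))      # one weight vector per map unit
coords = np.array([(i, j) for i in range(gw) for j in range(gh)], float)

for t in range(500):
    lr = 0.5 * (1 - t / 500)                    # decaying learning rate
    sigma = 2.0 * (1 - t / 500) + 0.5           # decaying neighbourhood
    x = pages[rng.integers(len(pages))]         # random training page
    bmu = np.argmin(((w - x) ** 2).sum(axis=1))      # best-matching unit
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)   # grid distances
    h = np.exp(-d2 / (2 * sigma ** 2))               # neighbourhood kernel
    w += lr * h[:, None] * (x - w)                   # pull units toward x

# Each page is assigned to its best-matching unit, i.e., its cluster.
clusters = [int(np.argmin(((w - p) ** 2).sum(axis=1))) for p in pages]
print(clusters)
```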