abstract: string (lengths 0–11.1k)
authors: string (lengths 9–1.96k)
title: string (lengths 4–353)
__index_level_0__: int64 (values 3–1,000k)
Semantic labeling is a powerful transformation technique to prove termination of term rewrite systems. The dual technique is unlabeling. For unlabeling it is essential to drop the so-called decreasing rules which sometimes have to be added when applying semantic labeling. We indicate two problems concerning unlabeling and present our solutions. The first problem is that currently unlabeling cannot be applied as a modular step, since the decreasing rules are determined by a semantic labeling step which may have taken place much earlier. To this end, we give an implicit definition of decreasing rules that does not depend on any knowledge about preceding labelings. The second problem is that unlabeling is in general unsound. To solve this issue, we introduce the notion of extended termination problems. Moreover, we show how existing termination techniques can be lifted to operate on extended termination problems. All our proofs have been formalized in Isabelle/HOL as part of the IsaFoR/CeTA project.
['Christian Sternagel', 'René Thiemann']
Modular and Certified Semantic Labeling and Unlabeling
135,750
['Trevor R. Shaddox', 'Patrick B. Ryan', 'Martijn J. Schuemie', 'David Madigan', 'Marc A. Suchard']
Hierarchical models for multiple, rare outcomes using massive observational healthcare databases.
984,930
['Ngo Anh Vien', 'Peter Englert', 'Marc Toussaint']
Policy Search in Reproducing Kernel Hilbert Space.
993,988
Quantum computation has suggested some new forms of quantum logic (called quantum computational logics), where meanings of sentences are identified with quantum information quantities. This provides a mathematical formalism for an abstract theory of meanings that can be applied to investigate different kinds of semantic phenomena (in social sciences, in medicine, in natural languages and in the languages of art), where both ambiguity and holism play an essential role.
['Maria Luisa Dalla Chiara', 'Roberto Giuntini', 'Roberto Leporini']
Holism, ambiguity and approximation in the logics of quantum computation: a survey
229,368
Compares several modulation schemes that employ trellis shaping to control the envelope fluctuations of a bandlimited 8-PSK signal. The goal is to reduce the distortions introduced by nonlinearities in the transmission path. The considered schemes differ in rate and constraint length of the convolutional encoder, which is the basis of the shaping process. Additional coding of the least significant bit compensates for the shaping loss. Thus, bit error rates comparable to that of conventional QPSK are achieved. The results show that even shaping schemes with very low complexity provide the redundancy required to sufficiently influence the symbol transitions.
['Manfred Litzenburger', 'Werner Rupprecht']
A comparison of trellis shaping schemes for controlling the envelope of a bandlimited PSK-signal
496,590
In this paper we present a serious game, Lewispace, where we focus on measuring and using electroencephalograms in order to detect how the learner reasons in the game. We track the learner's reasoning according to different regions of the brain. Four standard lobes were taken into consideration: frontal, parietal, occipital and temporal. Each lobe was measured for each participant. We also studied the distribution of lobe measures for all the participants. We found that some regions are more related to the learner's vision and reflection during the game, which could be an indication that the learner follows the correct reasoning process. Preliminary results show that our game enhances learners' performance. Moreover, the learners mostly use the occipital lobe to visualize the task presented in the game and the frontal lobe for the reasoning process.
['Ramla Ghali', 'Claude Frasson', 'Sébastien Ouellet']
Using Electroencephalogram to Track Learner's Reasoning in Serious Games
851,241
This paper considers the cutting stock problem with frustum of cone bars. A multiple objective optimization model is established by taking into account trim loss, the number of cutting patterns and usable leftovers. A decision-making method for solving this cutting stock problem is designed. First, an improved non-dominated sorting heuristic evolutionary algorithm is developed for generating the Pareto non-dominated solutions. Then the weights of the objectives are calculated by combining the subjective methods (subjectively determined by the decision maker) and objective methods (objectively determined by numerical computing). Finally, a multi-attribute decision making method is used for choosing a cutting plan from the Pareto non-dominated solutions. Computational results indicate that the method proposed is feasible.
['Lin Liu', 'Xinbao Liu', 'Jun Pei', 'Wenjuan Fan', 'Panos M. Pardalos']
A study on decision making of cutting stock with frustum of cone bars
584,705
We consider a global regulation problem for a class of nonlinear systems that have uncertain high-order feedforward and non-feedforward nonlinear terms. There also exists an unknown time-varying delay in the main control input. While many existing results have employed predictor methods to deal with the input delay, we propose a non-predictor controller with a dynamic gain that requires neither any memory of past input values nor any information on the time-varying delay. Moreover, we introduce new high-order conditions on feedforward and non-feedforward terms along with the unknown time-varying input delay. A class of extended nonlinear systems to which our controller is applicable is identified. Via numerical and application examples, we demonstrate that nonlinear time-delay systems which cannot be handled by the existing methods are successfully regulated by our non-predictor state feedback controller.
['Min-Sung Koo', 'Ho-Lim Choi']
Non-predictor controller for feedforward and non-feedforward nonlinear systems with an unknown time-varying delay in the input
557,264
We study the interactive compression of an arbitrary function of two discrete sources with zero-error. The information on the joint distribution of the sources available at the two sides is asymmetric, in that one user knows the true distribution, whereas the other user observes a different distribution. This paper considers the minimum worst-case zero-error codeword length under such asymmetric prior distributions. We investigate the cases for which reconciling the information mismatch is better or worse than not reconciling it, but instead using an encoding scheme that ensures zero-error with possibly increased communication rate. Our results indicate a reconciliation-communication tradeoff and that there exist cases for which partially reconciling the mismatched information is better than both perfect reconciliation and no reconciliation.
['Basak Guler', 'Aylin Yener', 'Ebrahim MolavianJazi', 'Prithwish Basu', 'Ananthram Swami', 'Carl Andersen']
Interactive Function Compression with Asymmetric Priors
969,321
Graphics Processing Units (GPUs) are used as general purpose parallel accelerators in a wide range of applications. They are found in most computing systems, and mobile devices are no exception. The recent availability of programming APIs such as OpenCL for mobile GPUs promises to open up new types of applications on these devices. However, producing high performance GPU code is extremely difficult. Subtle differences in device characteristics can lead to large performance variations when different optimizations are applied. As we will see, this is especially true for a mobile GPU such as the ARM Mali GPU, which has a very different architecture than desktop-class GPUs. Code optimized and tuned for one type of GPU is unlikely to achieve its performance potential on another type of GPU. Auto-tuners have traditionally been an answer to this performance portability challenge. For instance, they have been successful on CPUs for matrix operations, which are used as building blocks in many high-performance applications. However, they are much harder to design for different classes of GPUs, given the wide variety of hardware characteristics. In this paper, we take a different perspective and show how performance portability for matrix multiplication is achieved using a compiler approach. This approach is based on a recently developed generic technique that combines a high-level programming model with a system of rewrite rules. Programs are automatically rewritten in successive steps, where optimization decisions are made. This approach is truly performance portable, resulting in high-performance code for very different types of architectures such as desktop and mobile GPUs. In particular, we achieve a speedup of 1.7x over a state-of-the-art auto-tuner on the ARM Mali GPU.
['Michel Steuwer', 'Toomas Remmelg', 'Christophe Dubach']
Matrix multiplication beyond auto-tuning: rewrite-based GPU code generation
846,846
Renewable energy sources are key enablers to decrease greenhouse gas emissions and to cope with anthropogenic global warming. Their intermittent behaviour and limited storage capabilities present challenges to power system operators in maintaining the high level of power quality and reliability. However, the increased availability of advanced automation and communication technologies has provided new intelligent solutions to face these challenges. Previous work has presented various new methods to operate highly interconnected power grids with corresponding components in a more effective way. As a consequence of these developments the traditional power system is transformed into a cyber-physical system, a smart grid.
['Thomas Strasser', 'Filip Andren', 'Georg Lauss', 'R. Bründlinger', 'Helfried Brunner', 'Cyndi Moyo', 'Christian Seitl', 'Sebastian Rohjans', 'Sebastian Lehnhoff', 'Peter Palensky', 'Panos Kotsampopoulos', 'Nikos D. Hatziargyriou', 'Gunter Arnold', 'Wolfram Heckmann', 'Erik Jong', 'Maurizio Verga', 'Giorgio Franchioni', 'Luciano Martini', 'Anna Magdalena Kosek', 'Oliver Gehrke', 'Henrik W. Bindner', 'Federico Coffele', 'Graeme Burt', 'Mihai Calin', 'Emilio Rodriguez-Seco']
Towards holistic power distribution system validation and testing—an overview and discussion of different possibilities
951,451
This paper gives a method of flexible hypersurface fitting with RBF kernel functions. In order to fit a hypersurface to a given set of points in a Euclidean space, we can apply the hyperplane fitting method to the points mapped to a high-dimensional feature space. This fitting is equivalent to a one-dimensional reduction of the feature space by eliminating the linear space spanned by an eigenvector corresponding to the smallest eigenvalue of a variance-covariance matrix of data points in the feature space. This dimension reduction is called minor component analysis (MCA), which solves the same eigenvalue problem as kernel principal component analysis and extracts the eigenvector corresponding to the least eigenvalue. In general, the feature space is set to a Euclidean space, which is a finite Hilbert space. To consider an MCA for an infinite Hilbert space, a kernel MCA (KMCA), which leads to an MCA in a reproducing kernel Hilbert space, should be constructed. However, the representer theorem does not hold for a KMCA, since infinitely many zero eigenvalues would appear in an MCA for the infinite Hilbert space. Then, the fitting solution is not determined uniquely in the infinite Hilbert space, contrary to the unique solution in a finite Hilbert space. This ambiguity of fitting seems disadvantageous because it causes instability in fitting, but it can also enable flexible fitting. Based on this flexibility, this paper gives a hypersurface fitting method in the infinite Hilbert space with RBF kernel functions to realize flexible hypersurface fitting. Although several eigenvectors of the matrix defined from the kernel function at each sample are considered, we obtain a candidate for a reasonable solution among the simulation results under a specific situation. Simulations show that the flexibility of our method is effective.
['Jun Fujiki', 'Shotaro Akaho']
Flexible Hypersurface Fitting with RBF Kernels
84,671
The authors are collaborating with a manufacturer of custom-built steel-frame modular units which are then transported for rapid erection on site (a volumetric building system). As part of its strategy to develop modular housing, Enemetric is taking the opportunity to develop intelligent buildings, integrating a wide range of sensors and control systems for optimising energy efficiency and directly monitoring structural health. Enemetric have recently been embracing Building Information Modeling (BIM) to improve workflow, in particular cost estimation, and to simplify computer-aided manufacture (CAM). By leveraging the existing data generated during the design phases, and projecting it to all other aspects of construction management, fewer errors are made and productivity is significantly increased. Enemetric may work on several buildings at once, and scheduling and priorities become especially important for effective workflow and for implementing Enterprise Resource Planning (ERP). The parametric nature of BIM is also very useful for improving building management, whereby real-time data collection can be logically associated with individual components of the BIM, stored in a local Building Management System performing structural health monitoring and environmental monitoring and control. BIM reuse can be further employed in building simulation tools, to apply simulation-assisted control strategies, in order to reduce energy consumption and increase occupant comfort.
['Amar Seeam', 'Tianxin Zheng', 'Yong Lu', 'Asif Usmani', 'David Laurenson']
BIM Integrated Workflow Management and Monitoring System for Modular Buildings
52,178
Many studies have suggested that the design of the tablet screen could affect tablet users' performance. The purpose of this study is to investigate the effects of screen background colors on the brain functions of elderly and young people when they are performing a task on a tablet computer. Twenty university students and 10 elderly people were recruited to participate in the experiment. The subjects were asked to count the number of circles on five different background colors (white, blue, yellow, red, and green) presented in random order. This step was done in a short period of time. The average percentages of correct answers for the circle counting tasks were higher with all the colored backgrounds than with the white background for both young and elderly people. The results indicate that white may not be the best choice for the background color of a tablet screen for best performance and attention for both young and elderly people.
['Muhammad Nur Adilin Mohd Anuardi', 'Hideyuki Shinohara', 'Atsuko K. Yamazaki']
A Pre-study of Background Color Effects on the Working Memory Area of the Brain
868,723
['Elena Maceviciute']
Review of: Huotari, Maija-Leena and Iivonen, Mirja, (Eds.) Trust in knowledge management and systems in organisations. Hershey, PA; London: Idea Group Publishing, 2004. ISBN 1-59140-220-4.
781,718
['Jing Wang', 'Ande Chang', 'Lianxing Gao']
Binary Probit Model on Drivers Route Choice Behaviors Based on Multiple Factors Analysis
947,311
The present work was carried out to design and develop novel protein kinase casein kinase 2 inhibitors of benzimidazole derivatives using 2D-QSAR, 3D-QSAR, and pharmacophore modeling. The pharmacophore models were observed to be in good correlation with the 2D-QSAR and 3D-QSAR predicted activities, with correlation coefficients (r2) of 0.7832 and 0.7483, respectively. The activities predicted by 2D-QSAR and 3D-QSAR were also observed to be highly correlated (pred_r2 = 0.7509 and 0.6821, respectively). The high value of the F ratio (26.853) and the low values of the standard error and the standard error of cross-validation support the above finding. The QSAR study revealed that substitution of a hydrophobic group in the benzimidazole ring at the 4th and 5th positions is unfavorable, while substitution of a less bulky group at the 6th position is favorable for CK2 inhibitory activity. 2D- and 3D-QSAR analyses of such derivatives provide important structural insights for designing potent casein kinase 2 inhibitors.
['Mukesh C. Sharma']
Rationalization of physicochemical characters and structural determinants of benzimidazole analogues as casein kinase 2 inhibitors: computational approach
938,666
In this paper, we propose G-Cons, an extension of a graph minimal coloring paradigm for consensus clustering. Based on the co-association values between data, our approach is a graph partitioning one which yields a combined partition by maximizing an objective function given by the average mutual information between the consensus partition and all initial combined clusterings. It exhibits more important consensus clustering features (quality and computational complexity) and enables building a combined partition by improving the stability and accuracy of clustering solutions. The proposed approach is evaluated against benchmark databases and promising results are obtained compared to other consensus clustering techniques.
['Haytham Elghazel', 'Khalid Benabdeslem', 'Fatma Hamdi']
Consensus clustering by graph based approach
675,239
The syntactic ambiguity of a transitive verb (Vt) followed by a noun (N) has long been a problem in Chinese parsing. In this paper, we propose a classifier to resolve the ambiguity of Vt-N structures. The design of the classifier is based on three important guidelines, namely, adopting linguistically motivated features, using all available resources, and easy integration into a parsing model. The linguistically motivated features include semantic relations, context, and morphological structures; and the available resources are treebank, thesaurus, affix database, and large corpora. We also propose two learning approaches that resolve the problem of data sparseness by autoparsing and extracting relative knowledge from large-scale unlabeled data. Our experiment results show that the Vt-N classifier outperforms the current PCFG parser. Furthermore, it can be easily and effectively integrated into the PCFG parser and general statistical parsing models. Evaluation of the learning approaches indicates that world knowledge facilitates Vt-N disambiguation through data selection and error correction.
['Yu-Ming Hsieh', 'Jason S. Chang', 'Keh-Jiann Chen']
Ambiguity Resolution for Vt-N Structures in Chinese
612,524
['Anirudh Agarwal', 'Shivangi Dubey', 'Ranjan Gangopadhyay', 'Soumitra Debnath']
Secondary User QoE Enhancement Through Learning Based Predictive Spectrum Access in Cognitive Radio Networks
855,548
This article proposes a simple nonparametric method to estimate the jump characteristics in asset price with noisy high-frequency data. We combine the pre-averaging approach and the threshold technique to identify the jumps, and then propose the pre-averaging threshold estimators for the number and sizes of the jumps that occurred. We further present the asymptotic properties of the proposed estimators. The Monte Carlo simulation shows that the estimators are robust to microstructure noise and work very well especially when the data frequency is ultra-high. Finally, an empirical example further demonstrates the power of the proposed method.
['Chao Yu', 'Xujie Zhao', 'Bo Zhang']
Nonparametric estimation of jump characteristics under market microstructure noise
902,136
The perturbation-based extremum seeking control is equivalent to fitting a linear model to the data and computing the objective function gradient from this fitted linear model. The present paper asks the question of whether increasing the model complexity (e.g., by using a quadratic or a higher-order polynomial model) would improve precision. It is shown that when the objective function is available for feedback and provided that the amplitude and the frequency of the excitation signal are sufficiently small, the order of the error made on the optimizing control is independent of the static model order.
['Moncef Chioua', 'B. Srinivasan', 'Martin Guay', 'Michel Perrier']
Model adequacy for a precise optimization using extremum seeking control
977,844
In this work we consider a stabilized Lagrange (or Kuhn–Tucker) multiplier method in order to approximate the unilateral contact model in linear elastostatics. The particularity of the method is that no discrete inf-sup condition is needed in the convergence analysis. We propose three approximations of the contact conditions well adapted to this method and we study the convergence of the discrete solutions. Several numerical examples in two and three space dimensions illustrate the theoretical results and show the capabilities of the method.
['Patrick Hild', 'Yves Renard']
A stabilized Lagrange multiplier method for the finite element approximation of contact problems in elastostatics
383,962
A visually-optimal quantization and rate-control strategy based on results of recent contrast sensitivity and suprathreshold summation experiments is proposed. At suprathreshold contrasts, masked detection thresholds for wavelet subband quantization distortions were approximately equal for scale-3, 4, and 5 distortions; approximately 52% greater for scale-2 distortions; and approximately 84% greater for scale-1 distortions. Based on a suprathreshold error-pooling model, contrasts for individual subbands are selected to match these contrast ratios, and are adjusted to account for changes in relative sensitivity at suprathreshold contrasts. Quantization step sizes are then computed from the adjusted base contrasts. A target contrast is estimated from the desired rate, and rate control is performed by adjusting this contrast until the rate is met. Images compressed with the proposed algorithm show improved visual quality at low bit rates.
['Damon M. Chandler', 'Sheila S. Hemami']
Contrast based quantization and rate control for wavelet coded images
477,550
The aim of the research described is to overcome important speech-modeling limitations of conventional hidden Markov models (HMMs), by developing a dynamic segmental HMM which models the changing pattern of speech over the duration of some phoneme-type unit. As a first step towards this goal, a static segmental HMM has been implemented and tested. This model reduces the influence of the independence assumption by using two processes to model variability due to long-term factors separately from local variability that occurs within a segment. Experiments have demonstrated that the performance of segmental HMMs relative to conventional HMMs is dependent on the "quality" of the system in which they are embedded. On a connected-digit recognition task for example, static segmental HMMs outperformed conventional HMMs for triphone systems but not for a vocabulary-independent monophone system. It is concluded that static segmental HMMs improve performance, as long as the system is such that the independence assumption is a major limiting factor.
['Wendy J. Holmes', 'Martin J. Russell']
Experimental evaluation of segmental HMMs
481,799
This paper deals with the design of interval observers for singularly perturbed linear systems. The full-order system is first decoupled into slow and fast subsystems. Then, using the cooperativity theory, an interval observer is designed for the slow and fast subsystems assuming that the measurement noise and the disturbances are bounded and the singular perturbation parameter is uncertain. This decoupling leads to two observers that estimate the lower and upper bounds for the feasible state domain. A numerical example shows the efficiency of the proposed technique.
['B. Yousfi', 'Tarek Raïssi', 'M. Amairi', 'David Gucik-Derigny', 'Mohamed Aoun']
Robust state estimation for singularly perturbed systems
819,562
Smuggling of radioactive materials and special nuclear materials (SNM) presents a concern to national and global nuclear security. Due to weak signals, nuclear materials are not easily detected, and numerous techniques have been developed to address issues related to illicit trafficking across borders. In this paper, we present a numerical model for high-Z radiography-based detection in cargo scanning to search for hidden, weakly signaling nuclear materials and to assess the effect of brightness ratios on the accuracy of this method. This numerical model attempts to present a new direction in improving the accuracy of cargo inspections by distinguishing high-Z materials, such as shielding materials (always high-Z) or SNM, from all other materials (which could also be in the high-Z group) present in an examined volume of a cargo container. Gamma/X-ray radiography is a widely established method to scan the interiors of cargo containers. However, processing of such radiographic images employs only 2-D radiography images. This presents a challenging limitation for accurate discrimination between high-Z, medium-Z, and low-Z materials. To address these shortcomings, we developed a new approach (demonstrated using numerical mirroring of the real scanning of cargo containers) based on two orthogonal radiography images providing the brightness and thickness of an observed volume. An empirical formula using these two variables is then derived to estimate the density of a scanned material in the interior of a cargo container. In addition to this empirical formula, a discrimination threshold for high-density materials is suggested, possibly providing new aspects of improvement in real scanning systems.
['Sangkyu Lee', 'Tatjana Jevremovic']
Modeling of high-Z materials detection in assessing brightness/density ratios and their impact on detection accuracy
607,869
We study a general class of statistical detection problems where the underlying objective is to detect items belonging to a rare class from a very large database. We propose a computationally efficient method to achieve this goal. Our method consists of two steps. In the first step we estimate the density function of the rare class alone with an adaptive bandwidth kernel density estimator. The adaptive choice of the bandwidth is inspired by the ancient Chinese board game known today as Go. In the second step we adjust this density locally depending on the density of the background class nearby. We show that the amount of adjustment needed in the second step is approximately equal to the adaptive bandwidth from the first step, which gives us additional computational savings. We name the resulting method LAGO, for "locally adjusted Go-kernel density estimator." We then apply LAGO to a real drug discovery dataset and compare its performance with a number of existing and popular methods.
['Mu Zhu', 'Wanhua Su', 'Hugh A. Chipman']
LAGO: A Computationally Efficient Approach for Statistical Detection
443,348
The ability to trace the history of individual products, especially their movement through supply and distribution chains, is key to many solutions such as targeted recalls and counterfeit detection. In most traceability applications a number of independent organizations have to work together. EPCglobal has proposed an architecture for a network of RFID databases where each database provides a standardized query interface. That architecture facilitates simple retrieval of traceability data from individual repositories, but it does not support complex traceability queries or cross-organizational query processing. Theseos (R. Agrawal, 2006) provides traceability applications with the ability to execute complex traceability queries that may span multiple RFID databases.
['Alvin Cheung', 'Karin Kailing', 'Stefan Schönauer']
Theseos: A Query Engine for Traceability across Sovereign, Distributed RFID Databases
543,370
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
['Kunkun Tang', 'Pietro Marco Congedo', 'Rémi Abgrall']
Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation
691,484
In this paper we experiment with and analyze Multipath TCP (MPTCP) as proposed by the Internet Engineering Task Force (IETF). The authors consider MPTCP features such as multipath aggregation, increased throughput, enhanced resilience, network handover, and the use of various congestion control algorithms over multiple paths to aggregate available bandwidth as key factors in assessing experiments with various topologies.
['P Anilal', 'B V Sainandan', 'Siva Sankara Sai S', 'Prabhakara Yellai']
Experimentation and analysis of Multipath TCP
907,929
This article explores some imperatives of Knowledge Management for organizational knowledge creation in the era of globalization. As the transformation of Knowledge Management practices of Japanese firms in the 1990s shows, Nonaka and Takeuchi's original model of organizational knowledge creation needs to be expanded by incorporating the concept of "community of practice" as the "engine" of knowledge creation. As an attempt at such expansion, it proposes a model of the knowledge-creating organization as a self-organizing network of interactive, overlapping, and self-managing communities of practice.
['Takaya Kawamura']
Managing networks of communities of practice for organizational knowledge creation A Knowledge Management imperative in the era of globalization
663,661
['Ivan M. Lessa', 'Glauco de Figueiredo Carneiro', 'Miguel Pessoa Monteiro', 'Fernando Brito e Abreu']
Scaffolding MATLAB and Octave Software Comprehension Through Visualization.
688,336
Computer vision in general, and object proposals in particular, are nowadays strongly influenced by the databases on which researchers evaluate the performance of their algorithms. This paper studies the transition from the Pascal Visual Object Challenge dataset, which has been the benchmark of reference for the last years, to the updated, bigger, and more challenging Microsoft Common Objects in Context. We first review and deeply analyze the new challenges, and opportunities, that this database presents. We then survey the current state of the art in object proposals and evaluate it focusing on how it generalizes to the new dataset. In light of these results, we propose various lines of research to take advantage of the new benchmark and improve the techniques. We explore one of these lines, which leads to an improvement over the state of the art of +5.2%.
['Jordi Pont-Tuset', 'Luc J. Van Gool']
Boosting Object Proposals: From Pascal to COCO
572,159
This paper proposes a methodology for visual tracking of a dynamic generalized subject within an unknown map, by relying on its perception as a separate entity which can be distinguished spatially and visually from its environment. To this purpose, a 3D-representation of the visible scenery is examined, and the subject is spatially identified by its externally viewed hull via a mesh-connection algorithm aided by visual cues, and visually identified by distinct feature tracking based on an incrementally built list of key-aspects. These two processes operate in closed-loop, and employing a set of assumptions regarding the subject's structural/temporal invariance the tracking health state can be determined. This work additionally presents the framework for the deployment of this scheme for autonomous aerial robotic subject tracking, employing the dynamic subject/environment distinction to obtain knowledge of the environment structure, and collision-free trajectory generation algorithms to achieve mobile tracking.
['Christos Papachristos', 'Dimos Tzoumanikas', 'Anthony Tzes']
Aerial robotic tracking of a generalized mobile target employing visual and spatio-temporal dynamic subject perception
571,879
Real-time reproduction of a 3D human image is realized by an experimental system built for virtual space teleconferencing, in which participants at different sites can feel as if they are at one site and can work cooperatively. In the teleconferencing system, a 3D model of a participant is constructed by a wire-frame model mapped with color texture and is displayed on a 3D screen at the receiving site. In the experimental system, to realize real-time detection of facial features at the sending site, tape marks are attached to facial muscles, and the marks are tracked visually. To detect movements of the head, body, hands and fingers in real time, magnetic sensors and a data glove are used. When the movements of the participant are reproduced at the receiving site, the detected results are used to drive the nodes in the wire-frame model. Using the experimental system, the optimum number of nodes for real-time reproduction is obtained. Results for real-time cooperative work using the experimental system are demonstrated.
['Jun Ohya', 'Yasuichi Kitamura', 'Haruo Takemura', 'Fumio Kishino', 'Nobuyoshi Terashima']
Real-time reproduction of 3D human images in virtual space teleconferencing
64,819
The affordances of learning analytics (LA) are being increasingly harnessed to enhance 21st century (21C) pedagogy and learning. Relatively rare, however, are use cases and empirically based understandings of students' actual experiences with LA tools and environments at fostering 21C literacies, especially in secondary schooling and Asian education contexts. This paper addresses this knowledge gap by 1) presenting a first iteration design of a computer-supported collaborative critical reading and LA environment and its 16-week implementation in a Singapore high school; and 2) foregrounding students' quantitative and qualitative accounts of the benefits and problematics associated with this learning innovation. We focus the analytic lens on the LA dashboard components that provided visualizations of students' reading achievement, 21C learning dispositions, critical literacy competencies and social learning network positioning within the class. The paper aims to provide insights into the potentialities, paradoxes and pathways forward for designing LA that take into consideration the voices of learners as critical stakeholders.
['Jennifer Pei-Ling Tan', 'Simon Yang', 'Elizabeth Koh', 'Christin Jonathan']
Fostering 21st century literacies through a collaborative critical reading and learning analytics environment: user-perceived benefits and problematics
715,675
In this paper, a direct adaptive fuzzy sliding mode approach is proposed to design a new robust controller, free of reaching-phase and chattering problems, for a multiple-input multiple-output (MIMO) three-tank system with unknown dynamics and external disturbances. The approach is based on modifying the sliding domain equation through the use of Mamdani fuzzy logic approaches. An adaptive fuzzy law based on the Takagi-Sugeno (TS) model is used to directly approximate the vector control of the system. Moreover, an auxiliary sliding mode control term is incorporated in the control law to attenuate the fuzzy approximation errors and the external disturbances. The stability and robustness of the proposed control scheme are established. Simulation results are presented which demonstrate the efficiency and robustness of the proposed control scheme.
['El Mehdi Mellouli', 'Ismail Boumhidi']
Direct adaptive fuzzy sliding mode controller without reaching phase for an uncertain three-tank-system
987,301
This paper proposes a novel method for stable grasping and attitude regulation of an object using a multi-fingered hand-arm system. The proposed method is based on a simple sensory-feedback control using the information of an object's attitude, and no mathematically complicated computations, such as calculation of inverse dynamics and kinematics, are required. In addition, the stability of the overall system under this method is verified. Firstly, nonholonomic rolling constraints between a multi-fingered hand-arm system and an object are formulated. Then, a novel control method for stable grasping and attitude regulation of the grasped object is proposed. It is assumed that information on the attitude of the object is available in real time from external sensors, such as vision, force, and tactile sensors. Next, the stability of the overall system is verified by analyzing the closed-loop dynamics. Finally, it is demonstrated through numerical simulations that our proposed method makes it possible to grasp an object of arbitrary shape and to regulate the attitude of the object stably.
['Akihiro Kawamura', 'Kenji Tahara', 'Ryo Kurazume', 'Tsutomu Hasegawa']
Sensory feedback attitude control for a grasped object by a multi-fingered hand-arm system
368,611
Let P be a set of n points inside a polygonal domain D. A polygonal domain with h holes (or obstacles) consists of h disjoint polygonal obstacles surrounded by a simple polygon which itself acts as an obstacle. We first study t-spanners for the set P with respect to the geodesic distance function d, where for any two points p and q, d(p,q) is equal to the Euclidean length of the shortest path from p to q that avoids the obstacles' interiors. For the case where the polygonal domain is a simple polygon (i.e., h=0), we construct a (sqrt(10)+eps)-spanner that has O(n log^2 n) edges, where eps is a given positive real number. For the case where there are h holes, our construction gives a (5+eps)-spanner with a size of O(sqrt(h) n log^2 n). Moreover, we study t-spanners for the visibility graph of P (VG(P), for short) with respect to a hole-free polygonal domain D. The graph VG(P) is not necessarily a complete graph or even connected. In this case, we propose an algorithm that constructs a (3+eps)-spanner of size almost O(n^{4/3}). In addition, we show that there is a set P of n points such that any (3-eps)-spanner of VG(P) must contain almost n^2 edges.
['Mohammad Ali Abam', 'Marjan Adeli', 'Hamid Homapour', 'Pooya Zafar Asadollahpoor']
Geometric Spanners for Points Inside a Polygonal Domain
608,682
Separases are large proteins that mediate sister chromatid disjunction in all eukaryotes. They belong to clan CD of cysteine peptidases and contain a well-conserved C-terminal catalytic protease domain similar to caspases and gingipains. However, unlike other well-characterized groups of clan CD peptidases, there are no high-resolution structures of separases and the details of their regulation and substrate recognition are poorly understood. Here we undertook an in-depth bioinformatical analysis of separases from different species with respect to their similarity in amino acid sequence and protein fold in comparison to caspases, MALT-1 proteins (mucosa-associated lymphoidtissue lymphoma translocation protein 1) and gingipain-R. A comparative model of the single C-terminal caspase-like domain in separase from C. elegans suggests similar binding modes of substrate peptides between these protein subfamilies, and enables differences in substrate specificity of separase proteins to be rationalised. We also modelled a newly identified putative death domain, located N-terminal to the caspase-like domain. The surface features of this domain identify potential sites of protein-protein interactions. Notably, we identified a novel conserved region with the consensus sequence WWxxRxxLD predicted to be exposed on the surface of the death domain, which we termed the WR motif. We envisage that findings from our study will guide structural and functional studies of this important protein family.
['Anja Winter', 'Ralf Schmid', 'Richard Bayliss']
Structural Insights into Separase Architecture and Substrate Recognition through Computational Modelling of Caspase-Like and Death Domains.
174,106
['Masanori Kawakita', 'Jun’ichi Takeuchi']
Barron and Cover's Theory in Supervised Learning and its Application to Lasso
817,745
['Josef Hekrdla', 'Erich-Peter Klement', 'Mirko Navara']
Two Approaches to Fuzzy Propositional Logics.
752,669
The paper addresses the problem of Lur'e sampled-data control design with nonuniform sampling. It is shown that this problem can be treated using a methodology based on Euler approximate discrete-time models associated with a formulation of the problem in the switched systems framework. Given a finite set of sampling periods, the problem is formulated as a stabilization problem for discrete-time switched Lur'e systems with norm-bounded uncertainty. A quadratic criterion is used to take into account possible penalties on the sampling periods. Sufficient conditions are provided to compute both the controller gains and the active sampling period. The approach takes into account the inter-sample behaviour and provides stability guarantees for the exact Lur'e sampled-data system.
['Julien Louis', 'Marc Jungers', 'Jamal Daafouz']
Stabilization of sampled-data Lur'e systems with nonuniform sampling
2,417
Publication of private set-valued data will provide enormous opportunities for counting queries and various data mining tasks. Compared to previous methods based on partition-based privacy models (e.g., k-anonymity), differential privacy provides strong privacy guarantees against adversaries with arbitrary background knowledge. However, the existing solutions based on differential privacy for data publication are currently limited to static datasets, and do not adequately address today's demand for up-to-date information. In this paper, we address the problem of differentially private set-valued data release in an incremental scenario in which the data to be transformed are not static. Motivated by this, we propose an efficient algorithm, called IncTDPart, to incrementally generate a series of differentially private releases. The proposed algorithm is based on a top-down partitioning model with the help of an item-free taxonomy tree and an update-bounded mechanism. Extensive experiments on real datasets confirm that our approach maintains high utility and scalability for counting queries.
['Xiaojian Zhang', 'Xiaofeng Meng', 'Rui Chen']
Differentially Private Set-Valued Data Release against Incremental Updates
620,566
This paper presents an original dynamic subsumption technique for Boolean CNF formulae. It exploits simple and sufficient conditions to detect, during conflict analysis, clauses from the formula that can be reduced by subsumption. During the learnt clause derivation, and at each step of the associated resolution process, checks for backward subsumption between the current resolvent and clauses from the original formula are efficiently performed. The resulting method allows the dynamic removal of literals from the original clauses. Experimental results show that the integration of our dynamic subsumption technique within the state-of-the-art SAT solvers Minisat and Rsat is particularly beneficial on crafted problems.
['Youssef Hamadi', 'Said Jabbour', 'Lakhdar Sais']
Learning for Dynamic Subsumption
976,679
Published data, whether in traditional publication formats such as research articles or in databases, often lack a consensus structure (which slows search and reasoning) and provenance and citation models (which lowers the incentive for publication [1]). Furthermore, in some disciplines the growing rate of data production exceeds the capacity of human comprehension. Together, these trends lead to the loss of valuable data from scientific discourse. Nanopublication is a data publication model built on top of existing Semantic Web technologies to counter these data dissemination and management trends [2]. A nanopublication represents the smallest unit of publishable information and consists of an (i) assertion and (ii) provenance [3]. The assertion takes the form of one or more semantic triples (subject-predicate-object combinations). The provenance describes how the assertion 'came to be', and includes supporting information (e.g., context, parameter settings, a description of methods) and attribution to the authors (of content) and creators (of the nanopublication), institutions supporting the work, funding sources and other information like date and time stamps and certification. Creating a nanopublication requires a one-time effort to model the assertion and provenance as RDF named graphs [3]. After submission to an open, decentralized nanopublication store, the nanopublication will be available both to humans and to automated inference and discovery engines. Nanopublications can be used to expose quantitative and qualitative data, experimental data as well as hypotheses, novel or legacy data, and negative data that usually goes unpublished. Nanopublications are meant to augment traditional long-form narrative.
['Mark Thompson', 'Erik Schultes', 'Marco Roos', 'Barend Mons']
Data Publishing Using Nanopublications
663,157
In this paper, we consider the problem of designing a topology for deploying a free space optical (FSO) link based network. The problem is to create a topology with strong connectivity and short diameter with uniform degree bounds on each node. Two centralized approaches are presented. The first approach constructs a backbone network by Delaunay triangulation. The basic structure is then refined to meet the design objectives. The second approach called the closest neighbor (CN) algorithm constructs a degree constrained minimum weight spanning tree. The tree is developed into a network with good connectivity and small diameter by forming edges with the closest neighbors. We prove that the CN algorithm forms a connected network. Through simulation and analysis we also show that this approach results in high reliability and small diameter.
['Prabhanjan C. Gurumohan', 'Joseph Y. Hui']
Topology design for free space optical networks
513,037
Communication technologies support virtual R&D groups; however, the indirect effects were more consistent in both time periods. The clearest findings were that centrality mediates the effects of functional role, status, and communication role on individual performance. Interestingly, centrality was a stronger direct predictor of performance than the individual characteristics considered in this study. The study illustrates the usefulness of accounting for network effects for better understanding individual performance in virtual groups.
['Manju K. Ahuja', 'Dennis F. Galletta', 'Kathleen M. Carley']
Individual Centrality and Performance in Virtual R&D Groups: An Empirical Study
375,858
efficient development of loosely-coupled and interoperable sets of services. Existing design approaches do not always take full advantage of the value and importance of the engineering invested in existing legacy systems. This paper proposes an approach to define the key services from such legacy systems effectively. The approach focuses on defining these services based on a Model-Driven Architecture approach supported by guidelines over a wide range of possible service types.
['Saad Alahmari', 'David De Roure', 'Ed Zaluska']
A Model-Driven Architecture Approach to the Efficient Identification of Services on Service-Oriented Enterprise Architecture
164,095
The problem of reconstructing digital signals which have been passed through a dispersive channel and corrupted with additive noise is discussed. The problems encountered by linear equalizers under adverse conditions on the signal-to-noise ratio and channel phase are described. By considering the equalization problem as a geometric classification problem the authors demonstrate how these difficulties can be overcome by utilizing nonlinear classifiers as channel equalizers. The manner in which neural networks can be utilized as adaptive channel equalizers is described, and simulation results which suggest that the neural network equalizers offer a performance which exceeds that of the linear structures, particularly in the high-noise environment, are presented.
['Gavin J. Gibson', 'S. Siu', 'C.F.N. Cowan']
The application of nonlinear structures to the reconstruction of binary signals
353,956
The presence of a large number of available actions in the context of an automated, adaptive decision process can lead to an excessively large search space and thus significantly increase the overhead of the policy learning process. This issue occurs particularly in problem domains such as path planning or grid scheduling where the number of decision points is large and irreducible. The learning algorithm developed in this paper attempts to create a more compact representation of the state and action space by grouping similar actions that are likely to lead to very similar future results. Actions are considered similar if they, with high probability, lead to future results with sufficient commonality. This paper develops this action clustering framework within the MDP formalism, where actions in any given state are grouped if they result in similar reinforcement feedback based on past learning experience. The resulting action sets are then considered as a whole in the decision process.
['Po Hsiang Chiu', 'Manfred Huber']
Clustering Similar Actions in Sequential Decision Processes
374,477
['Narges Ahmidi', 'Lingling Tao', 'Shahin Sefati', 'Yixin Gao', 'Colin Lea', 'Benjamín Béjar', 'Luca Zappella', 'Sanjeev Khudanpur', 'René Vidal', 'Gregory D. Hager']
A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery.
971,745
['Mark J. Schiefsky', 'Malcolm D. Hyman']
Euclid and beyond: towards a long-term history of deductivity.
961,163
This paper addresses the problem of designing and implementing complex control systems for real-time embedded software. Typical applications involve different control laws corresponding to different phases or modes, e.g., take-off, full flight and landing in a fly-by-wire control system. On one hand, existing methods such as the combination of Simulink/Stateflow provide powerful but unsafe mechanisms by means of imperative updates of shared variables. On the other hand, synchronous languages and tools such as Esterel or SCADE/Lustre are too restrictive and forbid fully separating the specification of modes from their actual instantiation with a particular control automaton. In this paper, we introduce a conservative extension of a synchronous data-flow language close to Lustre, in order to be able to define systems with modes in a more modular way, while ensuring the absence of data races. We show that such a system can be viewed as an object where modes are methods acting on a shared memory. The object is associated with a scheduling policy which specifies the ways methods can be called to build a valid synchronous reaction. We show that the verification of the proper use of an object reduces to a type inference problem using row types introduced by Wand, Remy and Vouillon. We define the semantics of the extended synchronous language and the type system. The proposed extension has been implemented and we illustrate its use through several examples.
['Paul Caspi', 'Jean-Louis Colaço', 'Léonard Gérard', 'Marc Pouzet', 'Pascal Raymond']
Synchronous objects with scheduling policies: introducing safe shared memory in lustre
124,499
In this paper a control strategy for the optimal energy management of a district heating power plant is proposed. The main goal of the control strategy is to reduce the running costs by optimally managing the boilers, the thermal energy storage and the flexible loads while satisfying a time-varying request and operation constraints. The optimization model includes a detailed modeling of boiler operating constraints, thermal energy exchange and the operating modes of the power plant layout. Furthermore, the uncertainty in power demand and renewable power output, as well as in weather conditions, is handled by formulating a two-stage stochastic problem and incorporating it into a model predictive control framework. A simulation evaluation based on real data and the layout of a Finnish power plant is conducted to assess the performance of our proposed framework.
['Francesca Verrilli', 'Alessandra Parisio', 'Luigi Glielmo']
Stochastic model predictive control for optimal energy management of district heating power plants
971,481
This talk proceeds from the premise that IR should engage in a more substantial dialogue with cognitive science. After all, how users decide relevance, or how they choose terms to modify a query, are processes rooted in human cognition. Recently, there has been a growing literature applying quantum theory (QT) to model cognitive phenomena ranging from human memory to decision making. Two aspects will be highlighted. The first will show how concept combinations can be modelled in a way analogous to quantum entangled twin-state photons. Details will be presented of cognitive experiments to test for the presence of "entanglement" in cognition via an analysis of bi-ambiguous concept combinations. The second aspect of the talk will show how quantum inference effects currently being used to fit models of human decision making may be applied to model interference between different dimensions of relevance. The underlying theme behind this talk is that QT can potentially provide the theoretical basis of a new genre of information processing models more aligned with human cognition.
['Peter D. Bruza']
Is there something quantum-like about the human mental lexicon?
554,378
Over the last years, the robotics community has made substantial progress in the detection and 3D pose estimation of known and unknown objects. However, the question of how to identify objects based on language descriptions has not been investigated in detail. While the computer vision community recently started to investigate the use of attributes for object recognition, these approaches do not consider the task setting typically observed in robotics, where a combination of colors, shapes, and materials might be used in referral language to identify specific objects in a scene. In this paper, we introduce an approach for identifying objects based on natural language containing the attributes of the object. Our experiments show that by using the attributes mentioned in the referral language it is indeed possible to build a learning object detection system that does not require any training images of the target classes.
['Zhe Zhao', 'Jiongkun Xie', 'Xiaoping Chen']
Attribute based object recognition by human language
638,564
We introduce the CAPER project (Collaborative information, Acquisition, Processing, Exploitation and Reporting), partially funded by the European Commission. The goal of CAPER is to create a common platform for the prevention of organized crime through sharing, exploitation and linking of Open and Closed information Sources. CAPER will support collaborative multilingual analysis of unstructured and audiovisual contents, based on Text Mining and Visual Analytics technologies. CAPER will allow Law Enforcement Agencies (LEAs) to share informational, investigative and experiential knowledge.
['Carlo Aliprandi', 'Andrea Marchetti']
Introducing CAPER, a Collaborative Platform for Open and Closed Information Acquisition, Processing and Linking
639,877
Dual-frequency precipitation radar (DPR) on board the GPM (Global Precipitation Measurement) core satellite has reflectivity measurements at two different frequency bands, namely Ku-band and Ka-band. The dual-frequency ratio from these measurements has been used to perform rain type classification and melting region detection in the dual-frequency classification module of the current DPR level 2 algorithm. Beyond the applications that have already been implemented, in this research we focus on the enhancement of the dual-frequency classification module. We introduce and evaluate algorithms to perform snow/rain separation and multiple scattering detection. These algorithms are candidates for a future version of the DPR algorithm.
['Minda Le', 'V. Chandrasekar']
Enhancement of dual-frequency classification module for GPM DPR
929,293
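To make the dual-frequency-ratio idea above concrete: with reflectivities in dBZ, DFR = Z_Ku − Z_Ka, and a large DFR aloft is indicative of snow/ice (stronger non-Rayleigh effects at Ka-band). The thresholds and profile below are made-up placeholders, not the DPR module's actual decision logic.

```python
# Illustrative DFR-based labeling of a single vertical profile.
import numpy as np

def classify_profile(z_ku_dbz, z_ka_dbz, snow_thresh=6.0, rain_thresh=2.0):
    """Label each range gate 'snow', 'mixed', or 'rain' from the DFR."""
    dfr = np.asarray(z_ku_dbz) - np.asarray(z_ka_dbz)   # DFR in dB
    labels = np.where(dfr >= snow_thresh, "snow",
             np.where(dfr <= rain_thresh, "rain", "mixed"))
    return dfr, labels

# toy vertical profile, top of the column first: DFR shrinks toward the surface
z_ku = [30.0, 31.0, 33.0, 35.0, 36.0]
z_ka = [22.0, 24.0, 28.0, 32.5, 34.5]
dfr, labels = classify_profile(z_ku, z_ka)
print(np.round(dfr, 1), labels)   # snow aloft, rain near the surface
```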
Managing acquisition and development efforts of contracted software is hard work for many organisations which require high-quality products to be produced. Mature supplier processes work best with mature acquisition processes which are able to appropriately plan, track and evaluate the work of the supplier. However, acquirer organisations might not always be that mature, so selecting and managing the best-fit suppliers becomes rather difficult. This paper introduces a study performed to overcome this difficulty for the Ministry of National Education (MONE) of Turkey. The first phase of the study includes defining the evaluation criteria for pre-qualification of software development companies, and the second phase includes defining the basic instructional software development process as well as the standards for the deliverables of the process. The paper also identifies the effort required by the Ministry for evaluating and managing the pre-qualification and development work in practice. Copyright © 2001 John Wiley & Sons, Ltd.
['Onur Demirörs', 'Elif Demirörs', 'Ayca Tarhan']
Managing instructional software acquisition
284,936
It is often desirable to compress or decompress relatively small blocks of data at high bandwidth and low latency (for example, for data fetches across a high speed network). Sequential compression may not satisfy the speed requirement, while simply splitting the block into smaller subblocks for parallel compression yields poor compression performance due to small dictionary sizes. We consider an intermediate approach, where multiple compressors jointly construct a dictionary. The result is parallel speedup, with compression performance similar to the sequential case.
['Peter A. Franaszek', 'John T. Robinson', 'Joy A. Thomas']
Parallel compression with cooperative dictionary construction
47,330
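A loose analogy to the scheme in the entry above, using zlib preset dictionaries: subblocks are compressed in parallel but all share one dictionary sampled from the block, recovering some of the ratio lost to splitting. This is only a conceptual sketch; the paper's compressors construct the dictionary cooperatively during compression itself, which zlib's API cannot express.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_subblock(subblock: bytes, shared_dict: bytes) -> bytes:
    # each worker starts from the same preset dictionary instead of empty
    co = zlib.compressobj(level=6, zdict=shared_dict)
    return co.compress(subblock) + co.flush()

def parallel_compress(block: bytes, n_workers: int = 4):
    size = -(-len(block) // n_workers)                  # ceil division
    subs = [block[i:i + size] for i in range(0, len(block), size)]
    # crude shared dictionary: a prefix sample of every subblock
    shared = b"".join(s[:512] for s in subs)[-32768:]   # zlib's 32 KB limit
    with ThreadPoolExecutor(n_workers) as pool:
        chunks = list(pool.map(lambda s: compress_subblock(s, shared), subs))
    return subs, chunks, shared

data = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n" * 200
subs, chunks, shared = parallel_compress(data)
print("shared-dict bytes: ", sum(map(len, chunks)))
print("independent bytes: ", sum(len(zlib.compress(s, 6)) for s in subs))
```

Decompression mirrors this with zlib.decompressobj(zdict=shared) per subblock, so the dictionary must be reproducible at the receiver.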
In this paper, we consider the dynamic subchannel allocation problem in OFDMA-based selective relaying networks. Our goal is to maximize the total throughput under a total transmission power constraint, while guaranteeing fairness in subchannel occupation by multiple destination nodes (users). Since the optimal solution to this optimization problem is extremely computationally complex to obtain, we decompose the original optimization problem into two subproblems. First, we assign subchannels to the users with the best equivalent channel gains under the assumption of equal power distribution among all subchannels. Second, the water-filling method is adopted to distribute the power over the determined links. Simulation results show that the performance of the proposed algorithm approaches asymptotically that of the optimal one, while reducing the computational complexity from exponential to linear in the numbers of subchannels, relay nodes and users. It is also shown that the subchannel permutation (SP) scheme brings an additional throughput gain to DF cooperative transmission.
['Hongxing Li', 'Hanwen Luo', 'Xinbing Wang', 'Chisheng Li']
Throughput Maximization for OFDMA Cooperative Relaying Networks with Fair Subchannel Allocation
23,539
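To illustrate the second subproblem of the decomposition above, here is the standard water-filling allocation over fixed subchannel gains. The gains, noise normalization and power budget are illustrative, and bisection is one common way to find the water level; it is not necessarily the paper's implementation.

```python
# Water-filling: p_i = max(0, mu - noise/g_i), with the water level mu set
# so that the powers sum to the total budget.
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    inv = noise / np.asarray(gains, dtype=float)   # "floor height" per channel
    lo, hi = 0.0, inv.max() + total_power
    for _ in range(100):                           # bisection on the water level
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - inv)
        lo, hi = (mu, hi) if p.sum() < total_power else (lo, mu)
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

g = np.array([2.0, 1.0, 0.5, 0.1])                 # equivalent channel gains
p = water_filling(g, total_power=4.0)
rate = np.log2(1.0 + g * p).sum()
print(np.round(p, 3), f"sum={p.sum():.3f}", f"rate={rate:.3f} bit/s/Hz")
# expect roughly p = [2.0, 1.5, 0.5, 0.0]: the worst channel gets nothing
```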
Contactless palmprint recognition has recently begun to draw attention of researchers. Different from conventional palmprint images, contactless palmprint images are captured under free conditions and usually have significant variations on translations, rotations, illuminations and even backgrounds. Conventional powerful palmprint recognition methods are not very effective for the recognition of contactless palmprint. It is known that low-rank representation (LRR) is a promising scheme for subspace clustering, owing to its success in exploring the multiple subspace structures of data. In this paper, we integrate LRR with the adaptive principal line distance for contactless palmprint recognition. The principal lines are the most distinctive features of the palmprint and can be correctly extracted in most cases; thereby, the principal line distances can be used to determine the neighbors of a palmprint image. With the principal line distance penalty, the proposed method effectively improves the clustering results of LRR by increasing the weights of the affinities among nearby samples with small principal line distances. Therefore, the weighted affinity graph identified by the proposed method is more discriminative. Extensive experiments show that the proposed method can achieve higher accuracy than both the conventional powerful palmprint recognition methods and the subspace clustering-based methods in contactless palmprint recognition. Also, the proposed method shows promising robustness to noisy palmprint images. The effectiveness of the proposed method indicates that using LRR for contactless palmprint recognition is feasible.
['Lunke Fei', 'Yong Xu', 'Bob Zhang', 'Xiaozhao Fang', 'Jie Wen']
Low-rank representation integrated with principal line distance for contactless palmprint recognition
873,025
We describe a recommender system based on dynamically structured holographic memory (DSHM), a cognitive model of associative memory that uses holographic reduced representations as the basis for its encoding of object associations. We compare this recommender to a conventional user-based collaborative filtering algorithm on three datasets: MovieLens and two bibliographic datasets of the kind typically found in a digital library. Off-line experiments show that the holographic recommender is competitive in accuracy for predicting movie preferences and more accurate than collaborative filtering on very sparse datasets. However, DSHM requires significant amounts of computational resources, which may require a distributed implementation for it to be practical as a recommender for large datasets.
['Matthew F. Rutledge-Taylor', 'André Vellino', 'Robert L. West']
A holographic associative memory recommender system
156,307
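DSHM builds on holographic reduced representations (HRRs). The sketch below shows the core HRR machinery — binding by circular convolution, retrieval by circular correlation — on a toy user–movie association; the dimensionality and the encoding scheme are illustrative assumptions, not DSHM's actual structure.

```python
import numpy as np

D = 2048
rng = np.random.default_rng(0)
vec = lambda: rng.normal(0.0, 1.0 / np.sqrt(D), D)   # ~unit-norm hypervector

def bind(a, b):     # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def probe(m, a):    # circular correlation: approximate inverse of bind
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(m)))

user, movie1, movie2, liked = vec(), vec(), vec(), vec()
# superpose two bound associations into one memory trace
memory = bind(user, bind(movie1, liked)) + bind(user, bind(movie2, liked))

noisy = probe(probe(memory, user), movie1)           # should resemble `liked`
cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print("match liked:", round(cos(noisy, liked), 2),
      " match random:", round(cos(noisy, vec()), 2))
```

The retrieved vector is noisy, so a real system compares it against a clean-up memory of known items — which is essentially the role DSHM's structured store plays.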
This book outlines the background and overall vision for the Internet of Things (IoT) and Machine-to-Machine (M2M) communications and services, including major standards. Key technologies are described, and include everything from physical instrumentation of devices to the cloud infrastructures used to collect data. Also included is how to derive information and knowledge, and how to integrate it into enterprise processes, as well as system architectures and regulatory requirements. Real-world service use case studies provide the hands-on knowledge needed to successfully develop and implement M2M and IoT technologies sustainably and profitably. Finally, the future vision for M2M technologies is described, including prospective changes in relevant standards. This book is written by experts in the technology and business aspects of Machine-to-Machine and the Internet of Things who have experience in implementing solutions. Standards included: ETSI M2M, IEEE 802.15.4, 3GPP (GPRS, 3G, 4G), Bluetooth Low Energy/Smart, IETF 6LoWPAN, IETF CoAP, IETF RPL, Power Line Communication, Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE), ZigBee, 802.11, Broadband Forum TR-069, Open Mobile Alliance (OMA) Device Management (DM), ISA100.11a, WirelessHART, M-BUS, Wireless M-BUS, KNX, RFID, Object Management Group (OMG) Business Process Modelling Notation (BPMN). Key technologies for M2M and IoT covered: embedded systems hardware and software, devices and gateways, capillary and M2M area networks, local and wide area networking, M2M service enablement, IoT data management and data warehousing, data analytics and big data, complex event processing and stream analytics, knowledge discovery and management, business process and enterprise integration, Software as a Service and cloud computing. The book combines technical explanations with design features of M2M/IoT and use cases; together, these descriptions will assist you to develop solutions that will work in the real world. It provides a detailed description of the network architectures and technologies that form the basis of M2M and IoT; clear guidelines and examples of M2M and IoT use cases from real-world implementations such as Smart Grid, Smart Buildings, Smart Cities, Participatory Sensing, and Industrial Automation; and a description of the vision for M2M and its evolution towards IoT.
['Jan Höller', 'Vlasios Tsiatsis', 'Catherine N. Mulligan', 'Stefan Avesand', 'Stamatis Karnouskos', 'David Boyle']
From Machine-to-Machine to the Internet of Things: Introduction to a New Age of Intelligence
780,178
Transaction execution in a peer-to-peer database network specifies that an update made to a peer's instance is applied to the peer's local database and propagated to related peers. Maintaining a successful execution of a transaction in such a network is challenging due to the dynamic behaviour of peers and the unstructured topologies of networks. In this paper, we present a decentralised transaction execution process that guarantees the correct execution of a transaction without relying on any global coordinator. In the network, a peer executes a transaction and provides the local execution information to the initiator of the transaction. The initiator of a transaction plays an important role in the successful execution and termination of a transaction. Transactions originating from different peers may conflict during their execution in the network. In this paper, we also show a process to resolve conflicts using a universal leader election algorithm, called Mega-Merger.
['Mehedi Masud', 'Sultan Aljahdali']
Concurrent execution of transactions in a peer-to-peer database network
354,279
['Mustafa Al-Lail', 'Ramadan Abdunabi', 'Robert B. France', 'Indrakshi Ray']
An Approach to Analyzing Temporal Properties in UML Class Models.
768,682
In this paper we present a fairly complex example of how the social model for agent conversations based on social commitments we have developed in the past formally supports the implementation of conversations for the Contract Net Protocol.
['Roberto A. Flores', 'Robert C. Kremer']
Formal Conversations for the Contract Net Protocol
546,714
Proposes two classes of constant weight codes, which can be used for correcting t symmetric errors and simultaneously detecting all unidirectional errors. Codes in the first class are in quasi-systematic form and codes in the second class are in systematic form. Since each codeword of codes in both classes can be divided into a data part and a parity check part, the proposed codes have the merit of easily mapping messages into codewords.
['M. C. Lin']
Constant weight codes for correcting symmetric errors and detecting unidirectional errors
311,641
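For intuition about the code family above (this is not the paper's construction): every constant-weight code detects all unidirectional errors, since a purely 0→1 or 1→0 error pattern changes the codeword weight, and a minimum distance of 2t+2 additionally corrects t symmetric errors. The greedy search below builds such a code for toy parameters.

```python
from itertools import combinations

def greedy_constant_weight_code(n, w, d_min):
    """Lexicographic greedy search over all weight-w words of length n."""
    code = []
    for ones in combinations(range(n), w):
        word = sum(1 << i for i in ones)          # bitmask of the 1-positions
        if all(bin(word ^ c).count("1") >= d_min for c in code):
            code.append(word)
    return code

n, w, t = 8, 4, 1                                 # toy parameters
code = greedy_constant_weight_code(n, w, d_min=2 * t + 2)
print(f"{len(code)} codewords (n={n}, weight {w}, distance >= {2 * t + 2}):")
for c in code:
    print(format(c, f"0{n}b"))
```

Unlike the paper's systematic constructions, this greedy code has no data/parity split, so encoding would require a lookup table.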
Motivation: Capillary electrophoresis (CE) of nucleic acids is a workhorse technology underlying high-throughput genome analysis and large-scale chemical mapping for nucleic acid structural inference. Despite the wide availability of CE-based instruments, there remain challenges in leveraging their full power for quantitative analysis of RNA and DNA structure, thermodynamics and kinetics. In particular, the slow rate and poor automation of available analysis tools have bottlenecked a new generation of studies involving hundreds of CE profiles per experiment. Results: We propose a computational method called high-throughput robust analysis for capillary electrophoresis (HiTRACE) to automate the key tasks in large-scale nucleic acid CE analysis, including the profile alignment that has heretofore been a rate-limiting step in the highest throughput experiments. We illustrate the application of HiTRACE on 13 datasets representing 4 different RNAs, 3 chemical modification strategies and up to 480 single mutant variants; the largest datasets each include 87360 bands. By applying a series of robust dynamic programming algorithms, HiTRACE outperforms prior tools in terms of alignment and fitting quality, as assessed by measures including the correlation between quantified band intensities between replicate datasets. Furthermore, while the smallest of these datasets required 7―10 h of manual intervention using prior approaches, HiTRACE quantitation of even the largest datasets herein was achieved in 3-12 min. The HiTRACE method, therefore, resolves a critical barrier to the efficient and accurate analysis of nucleic acid structure in experiments involving tens of thousands of electrophoretic bands. Availability: HiTRACE is freely available for download at http://hitrace.stanford.edu.
['Sungroh Yoon', 'Jinkyu Kim', 'Justine Hum', 'Hanjoo Kim', 'Seunghyun Park', 'Wipapat Kladwang', 'Rhiju Das']
HiTRACE: High-throughput robust analysis for capillary electrophoresis
94,729
This paper reports collective behaviors of a multirobot system with simple dynamics and interactions. The model for collective motion is described by fundamental kinetics and the dynamics of the heading, which each element has as a degree of freedom. First, we show that a system based on this model realizes various types of behavior according to the parameter set of the model, resembling the behavior of living creatures such as fish, birds and small insects. Next, we examine the behavior of a modified model that improves the anisotropy of the interaction force. In particular, we discuss the characteristics of the linear formation obtained by modifying the optimal-distance term between neighbors. The performance of the model is examined by both computer simulation and robot experiments.
['Ken Sugawara', 'Hiroumi Tanigawa', 'Kazuhiro Kosuge', 'Yoshinori Hayakawa', 'Tsuyoshi Mizuguchi', 'Masaki Sano']
Collective Motion and Formation of Simple Interacting Robots
439,454
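A generic sketch of the class of model the entry above describes: point agents whose heading degree of freedom relaxes toward the mean neighbor heading, plus a spacing force with an optimal inter-agent distance. The specific update rule and constants are illustrative, not the authors' kinetics; varying them shifts the group between swarming- and schooling-like behavior.

```python
import numpy as np

N, steps, dt = 30, 300, 0.1
r_opt, r_see = 1.0, 3.0            # preferred spacing, interaction radius
k_spread, k_align, speed = 0.5, 1.0, 0.5

rng = np.random.default_rng(1)
pos = rng.uniform(0, 5, (N, 2))
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    diff = pos[None, :, :] - pos[:, None, :]          # diff[i, j] = pos_j - pos_i
    dist = np.linalg.norm(diff, axis=-1) + np.eye(N)  # eye avoids self-division
    nbr = (dist < r_see) & ~np.eye(N, dtype=bool)
    # spacing force: attract beyond r_opt, repel inside it
    force = (k_spread * (dist - r_opt) / dist)[:, :, None] * diff
    force = (force * nbr[:, :, None]).sum(axis=1)
    # desired heading: mean neighbor heading (self included) nudged by force
    sx = np.where(nbr, np.sin(theta)[None, :], 0.0).sum(1) + np.sin(theta)
    cx = np.where(nbr, np.cos(theta)[None, :], 0.0).sum(1) + np.cos(theta)
    target = np.arctan2(sx + force[:, 1], cx + force[:, 0])
    theta += k_align * np.sin(target - theta) * dt    # relax toward target
    pos += speed * dt * np.c_[np.cos(theta), np.sin(theta)]

# polar order parameter: 1 when all headings agree, ~0 when disordered
print(round(float(np.hypot(np.sin(theta).mean(), np.cos(theta).mean())), 2))
```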
With the smart grid coming near, wide-area supervisory control and data acquisition (SCADA) becomes more and more important. However, traditional SCADA systems are not suitable for the openness and distribution requirements of the smart grid. Distributed SCADA services should be openly composable and secure. Event-driven methodology makes service collaborations more real-time and flexible because of the space, time and control decoupling of event producer and consumer, which gives us an appropriate foundation. Our SCADA services are constructed and integrated based on distributed events in this paper. Unfortunately, an event-driven SCADA service does not know who consumes its events, and consumers do not know who produces the events either. In this environment, a SCADA service cannot directly control access because of anonymous and multicast interactions. In this paper, a distributed security framework is proposed to protect not only service operations but also data contents in smart grid environments. Finally, a security implementation scheme is given for SCADA services.
['Yang Zhang', 'Junliang Chen']
Wide-area SCADA system with distributed security framework
344,900
Exchanging data on noncontiguous user buffers has been a dominant communication pattern in many scientific applications. The OpenSHMEM specification introduces a new set of communication routines to support strided data communication. Most high performance implementations of the OpenSHMEM specification support strided data communication by either a packing/unpacking or a multiple-reads/writes-based scheme, which incurs significant performance overhead during communication. This performance overhead could prevent application developers from using OpenSHMEM strided data communication routines. Recently, Mellanox has introduced a novel feature, called User-mode Memory Registration (UMR), for noncontiguous data transfer. UMR has the potential to support efficient OpenSHMEM strided data communication. In this paper, we propose UMR-based schemes to support one-sided zero-copy strided data communication for OpenSHMEM. To the best of our knowledge, this is the first paper to design OpenSHMEM strided data communication using the UMR feature. We propose and implement UMR-based designs on top of MVAPICH2-X. Experimental results with the shmem_iget operation show 3X performance improvement over the multiple reads scheme in default MVAPICH2-X, and 20X performance improvement over the OpenSHMEM reference implementation configured with GASNet. At the application level, for a 3D stencil communication kernel with OpenSHMEM iget routines on 512 processes, the proposed UMR-based design outperforms the multiple reads scheme in default MVAPICH2-X by 20% in total execution time.
['Mingzhe Li', 'Khaled Hamidouche', 'Xiaoyi Lu', 'Jie Zhang', 'Jian Lin', 'Dhabaleswar K. Panda']
High Performance OpenSHMEM Strided Communication Support with InfiniBand UMR
649,861
This paper presents a new technique for modelling object classes (such as faces) and matching the model to novel images from the object class. The technique can be used for a variety of image analysis applications including face recognition, object verification and facial expression analysis. The model, called a hierarchical morphable model, is "learned" from example images (partitioned into components) and their correspondences. This is an extension to the work on morphable models described in previous papers. Hierarchical morphable models are shown to find good matches to novel face images and are also robust to partial occlusion.
['Michael J. Jones', 'Tomaso Poggio']
Hierarchical morphable models
264,458
['Brian Coats', 'Subrata Acharya']
Achieving Electronic Health Record Access from the Cloud
579,905
This paper concerns the problem of constructing a minimum spanning tree (MST) in a synchronous distributed network with n nodes, where each node knows only the identities of itself and its neighbors. We assume the CONGEST model, where messages are of size O(log n) bits. Spanning tree construction was long believed to require an amount of communication linear in the number of edges. In 2015, King, Kutten and Thorup presented a Monte Carlo algorithm which broke this communication bound. In particular, it showed that an MST could be constructed with time and message complexity O(n log^2 n / log log n), independent of the number of edges. Here we give trade-offs between time and communication. Our Monte Carlo algorithm runs in O(n/ε) time and O(n^(1+ε)/ε · log log n) messages for any 1 > ε ≥ (log log n)/(log n). For the spanning tree problem, we show a time bound of O(n) and a communication bound of O(n log n log log n) messages. We also provide the first algorithm that constructs an MST in time proportional to the diameter of the MST, up to a logarithmic factor, with o(m) communication.
['Ali Mashreghi', 'Valerie King']
Time-communication trade-offs for minimum spanning tree construction
972,281
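As background only (this is not the paper's algorithm), the sketch below shows sequential Borůvka phases — the component-merging pattern that GHS-style distributed MST algorithms implement with messages. Each phase, every component picks its lightest outgoing edge and merges along it, so O(log n) phases suffice, which is why the pattern maps well onto message-passing rounds.

```python
def boruvka_mst(n, edges):
    """edges: list of (weight, u, v); weights assumed distinct for simplicity."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:
        best = {}                           # component root -> lightest out-edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in best or w < best[r][0]:
                        best[r] = (w, u, v)
        if not best:
            break                           # graph is disconnected
        for w, u, v in best.values():
            ru, rv = find(u), find(v)
            if ru != rv:                    # re-check: merges happen in-phase
                parent[ru] = rv
                mst.append((w, u, v))
                components -= 1
    return mst

edges = [(4, 0, 1), (8, 1, 2), (11, 0, 3), (2, 1, 3), (6, 2, 3), (9, 2, 4), (5, 3, 4)]
print(sorted(boruvka_mst(5, edges)))   # total weight 4 + 2 + 6 + 5 = 17
```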
Automated Facial Expression Recognition (FER) has remained a challenging and interesting problem in computer vision. Despite efforts made in developing various methods for FER, existing approaches lack generalizability when applied to unseen images or those captured in the wild (i.e., the results are not significant). Most of the existing approaches are based on engineered features (e.g., HOG, LBPH, and Gabor) where the classifier's hyper-parameters are tuned to give the best recognition accuracies across a single database, or a small collection of similar databases. This paper proposes a deep neural network architecture to address the FER problem across multiple well-known standard face datasets. Specifically, our network consists of two convolutional layers each followed by max pooling and then four Inception layers. The network is a single component architecture that takes registered facial images as the input and classifies them into either of the six basic or the neutral expressions. We conducted comprehensive experiments on seven publicly available facial expression databases, viz. MultiPIE, MMI, CK+, DISFA, FERA, SFEW, and FER2013. The results of our proposed architecture are comparable to or better than the state-of-the-art methods and better than traditional convolutional neural networks in both accuracy and training time.
['Ali Mollahosseini', 'David Chan', 'Mohammad H. Mahoor']
Going deeper in facial expression recognition using deep neural networks
605,661
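A PyTorch sketch of the architecture shape the entry above describes: two conv+maxpool stages followed by four Inception-style blocks and a 7-way classifier (six basic expressions plus neutral). The channel widths, input size, and classifier head are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 / pooled branches, concatenated on channels."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c = c_out // 4
        self.b1 = nn.Conv2d(c_in, c, 1)
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c, 1), nn.Conv2d(c, c, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(c_in, c, 1), nn.Conv2d(c, c, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, 1, padding=1), nn.Conv2d(c_in, c, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], 1))

class FERNet(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, 7, stride=2, padding=3), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
        )
        self.inception = nn.Sequential(*[InceptionBlock(128 if i == 0 else 256, 256)
                                         for i in range(4)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(256, n_classes))

    def forward(self, x):
        return self.head(self.inception(self.stem(x)))

logits = FERNet()(torch.randn(2, 1, 96, 96))   # registered grayscale faces
print(logits.shape)                            # torch.Size([2, 7])
```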
The paper addresses the design of polyphase IIR filters based on Nth-order single-coefficient allpass sub-filters in the constraint coefficient space using the Constraint Downhill Simplex Algorithm (CDSA). Incorporating the bit-flipping algorithm in its core engine allowed the optimisation routine to converge to better target designs without affecting the high speed of the original algorithm. Establishing the boundaries of the search space required by the Downhill Simplex Algorithm (DSA) for the two-path polyphase IIR filter is also presented.
['Artur Krukowski', 'Izzet Kale']
Constraint two-path polyphase IIR filter design using downhill simplex algorithm
476,469
The Component Agent Framework for domain-Experts (CAFnE) toolkit is an extension to the Prometheus Design Tool (PDT). It uses the detailed design produced by PDT with further annotations by domain experts to automatically generate executable code into a desired agent platform. The key feature of CAFnE is that it allows domain experts with limited programming skills to easily build and modify agent systems.
['Gaya Buddhinath Jayatilleke', 'John Thangarajah', 'Lin Padgham', 'Michael Winikoff']
Component Agent Framework for domain-Experts (CAFnE) toolkit
174,071
This paper presents an extended adjoint decoupling method to conduct the digital decoupling controller design for the continuous-time transfer function matrices with multiple (integer/fractional) time delays in both the denominator and the numerator matrix. First, based on the sampled unit-step response data of the afore-mentioned multiple time-delay system, the conventional balanced model-reduction method is utilised to construct an approximated discrete-time model of the original (known/unknown) multiple time-delay continuous-time transfer function matrix. Then, a digital decoupling controller is designed by utilising the extended adjoint decoupling method together with the conventional discrete-time root-locus method. An illustrative example is given to demonstrate the effectiveness of the proposed method.
['Linbo Xie', 'C.-Y. Wu', 'Leang-San Shieh', 'Jason Sheng Hong Tsai']
Digital decoupling controller design for multiple time-delay continuous-time transfer function matrices
454,566
While many data mining models concentrate on automation and efficiency, interactive data mining models focus on adaptive and effective communication between human users and computer systems. User views, preferences, strategies and judgements play the most important roles in human-machine interaction, and guide the selection of target knowledge representations, operations, and measurements. In practice, user views, preferences and judgements also determine strategies for handling abnormal situations and for explaining mined patterns. In this paper, we discuss these fundamental issues.
['Yan Zhao', 'Yaohua Chen', 'Yiyu Yao']
User-centered Interactive Data Mining
515,191
In this paper, a genetic algorithm (GA)-based optimisation technique for the controllers of a two-actuator levitation system is discussed. GA has a proven track record in optimising parameters for different types of control schemes. Any electromagnetic levitation system (EMLS) is inherently unstable and strongly non-linear in nature. Controllers based on a linear model and designed by the classical approach for any EMLS have a restricted zone of operation. For a small variation of the operating air-gap, there is a sharp degradation of controller performance. It is therefore essential to design an optimised controller that will stabilise the unstable EMLS and provide satisfactory performance over a wide range of operating air-gaps. This paper focuses mainly on the optimal control of a proposed two-actuator EMLS scheme, using a stochastic optimisation technique based on GA.
['Rupam Bhaduri', 'Subrata Banerjee']
Optimisation of controller parameters by genetic algorithm for an electromagnetic levitation system
270,433
We consider a network of n sender/receiver pairs placed randomly in a region of unit area. Network capacity or maximum throughput is defined as the highest rate that can be achieved by each sender/receiver pair over a long period of time. It is known that without using relays (i.e., via only direct communication), the maximum throughput is below any constant, that is, it strictly decays as n increases. The exact network capacity without relaying for static or mobile networks is not known; however, a known achievable lower bound is O[(log(n))/n]. Our goal is to find a higher achievable rate. We show, by demonstrating a simple coding and scheduling scheme that uses mobility, that O[(log(n))/(n^(1-β))] is achievable, where β > 0 is a constant that depends on the power attenuation factor in the wireless medium. For example, when power decays as d^(-4) with distance d, O[(log(n))/(n^0.25)] is achievable. We assume channels to be AWGN interference channels throughout this work.
['E. Uysal-Biyikoglu', 'Abtin Keshavarzian']
Throughput achievable with no relaying in a mobile interference network
393,498
Composite web services can be orchestrated in a decentralized manner by breaking down the original service specification into a set of partitions and executing them on a distributed infrastructure. The infrastructure consists of multiple service engines communicating with each other over asynchronous messaging. Decentralized orchestration yields performance benefits by exploiting concurrency and reducing the data on the network. Further, decentralized orchestration may be necessary to orchestrate certain composite web services due to privacy and data flow constraints. However, decentralized orchestration also results in additional complexity due to absence of a centralized global state, and overlapping or different life cycles of the various partitions. This makes handling of faults arising from composite service partitions or from the failure of component web services, a challenging task. In this paper we propose a mechanism for handling faults in decentralized orchestration of composite web services. The mechanism includes a strategy for placement of fault handlers and compensation handlers, and schemes for fault propagation and fault recovery. The mechanism is designed to maintain the semantics of the original specification while ensuring minimal overheads.
['Girish Chafle', 'Sunil Chandra', 'Pankaj Kankar', 'Vijay Mann']
Handling faults in decentralized orchestration of composite web services
837,658
The central issue of explicit rate control for the available bit rate (ABR) service in ATM networks is the computation of a fair rate for every connection. In this paper, we propose a new fair-rate allocation algorithm called fast max-min rate allocation (FMMRA) for ATM switches supporting ABR services. The FMMRA algorithm provides the means to compute max-min fair rates with O(1) computational complexity. This exact calculation of fair rates expedites convergence to max-min fair shares and offers an excellent transient response. In the steady state, the algorithm operates without causing any oscillations in rates. The FMMRA algorithm does not require any parameter tuning and proves to be very robust in a large ATM network. Some simulation results are provided to show the effectiveness of the algorithm.
['Ambalavanar Arulambalam', 'Xiaoqiang Chen', 'Nirwan Ansari']
An intelligent explicit rate control algorithm for ABR service in ATM networks
433,392
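For context on the quantity FMMRA computes: the sketch below is the textbook progressive-filling computation of max-min fair rates for a tiny network. It is an offline illustration; the paper's contribution is computing these shares at a switch with O(1) work per update, which this loop does not attempt.

```python
def max_min_fair(links, routes):
    """links: {link: capacity}; routes: {conn: [links it crosses]}."""
    rate = {c: 0.0 for c in routes}
    cap = dict(links)
    active = set(routes)
    while active:
        crossing = {l: [c for c in active if l in routes[c]] for l in cap}
        # the link that saturates first if all active rates grow equally
        delta, bottleneck = min((cap[l] / len(cs), l)
                                for l, cs in crossing.items() if cs)
        for c in active:                     # grow everyone by delta
            rate[c] += delta
            for l in routes[c]:
                cap[l] -= delta
        active -= set(crossing[bottleneck])  # saturated connections freeze
    return rate

links = {"L1": 10.0, "L2": 6.0}
routes = {"A": ["L1"], "B": ["L1", "L2"], "C": ["L2"]}
print(max_min_fair(links, routes))           # expect A=7, B=3, C=3
```

Connection B is bottlenecked at L2 (shared equally with C), and A inherits the leftover capacity of L1 — exactly the max-min property an ABR switch aims to enforce.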
In this work we propose a modified GA that assigns a unique mutation rate to each gene based on the respective gene's contribution to the fitness of the individual. Although the proposed model is not "parameter free", through a number of experiments we show that the parameters of this model are significantly less sensitive to the landscape of the problems than the mutation rate in a conventional GA, implying that this model can deal effectively with a wide range of problems without the requirement to set the mutation rate empirically.
['Pitoyo Hartono', 'Shuji Hashimoto', 'Mattias Wahde']
Labeled-GA with adaptive mutation rate
68,097
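A sketch of the adaptive idea described above on a OneMax-style toy: each gene copy carries its own mutation rate, shrunk when the gene agrees with the current best individual (a crude stand-in for "contribution to fitness") and grown otherwise. The update constants and the contribution proxy are guesses for illustration, not the authors' labeled-GA rule.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.integers(0, 2, 40)                   # hidden OneMax-style optimum
fitness = lambda g: int(np.sum(g == target))

pop_size, n_gen = 30, 120
genes = rng.integers(0, 2, (pop_size, target.size))
mrate = np.full_like(genes, 0.05, dtype=float)    # one mutation rate per gene

for _ in range(n_gen):
    fit = np.array([fitness(g) for g in genes])
    best = genes[fit.argmax()]
    contributes = genes == best                   # proxy for gene contribution
    # adapt rates: shrink for contributing genes, grow for the rest
    mrate = np.clip(np.where(contributes, mrate * 0.9, mrate * 1.1), 1e-3, 0.5)
    # binary tournament selection; each offspring inherits its parent's rates
    i, j = rng.integers(0, pop_size, (2, pop_size))
    win = (fit[i] > fit[j])[:, None]
    winners, wrates = np.where(win, genes[i], genes[j]), np.where(win, mrate[i], mrate[j])
    flip = rng.random(winners.shape) < wrates     # per-gene mutation
    genes, mrate = np.where(flip, 1 - winners, winners), wrates

print("best fitness:", max(fitness(g) for g in genes), "/", target.size)
```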
The proliferation of online sensitive data about individuals and organizations makes concern about the privacy of these data a top priority. There have been many formulations of privacy and, unfortunately, many negative results about the feasibility of maintaining privacy of sensitive data in realistic networked environments. We formulate communication-complexity-based definitions, both worst case and average case, of a problem’s privacy-approximation ratio. We use our definitions to investigate the extent to which approximate privacy is achievable in a number of standard problems: the 2nd-price Vickrey auction, Yao’s millionaires problem, the public-good problem, and the set-theoretic disjointness and intersection problems. For both the 2nd-price Vickrey auction and the millionaires problem, we show that not only is perfect privacy impossible or infeasibly costly to achieve, but even close approximations of perfect privacy suffer from the same lower bounds. By contrast, if the inputs are drawn uniformly at random from {0, …, 2^k − 1}, then, for both problems, simple and natural communication protocols have privacy-approximation ratios that are linear in k (i.e., logarithmic in the size of the input space). We also demonstrate tradeoffs between privacy and communication in a family of auction protocols. We show that the privacy-approximation ratio provided by any protocol for the disjointness and intersection problems is necessarily exponential (in k). We also use these ratios to argue that one protocol for each of these problems is significantly fairer than the others we consider (in the sense of relative effects on the privacy of the different players).
['Joan Feigenbaum', 'Aaron D. Jaggard', 'Michael Schapira']
Approximate Privacy: Foundations and Quantification
200,569
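As a toy illustration of the "simple and natural protocol" style the entry above alludes to (an assumption about the protocol, not the paper's exact construction): the obvious bit-by-bit protocol for Yao's millionaires problem reveals matching high-order bits until the parties disagree, so on uniformly random k-bit inputs only a short prefix is typically leaked.

```python
def millionaires_bits(x: int, y: int, k: int):
    """Compare k-bit values by exchanging bits from the most significant
    down; the first disagreement settles who is richer. Returns the winner
    and the number of bits communicated (2 per round)."""
    sent = 0
    for i in reversed(range(k)):
        bx, by = (x >> i) & 1, (y >> i) & 1
        sent += 2
        if bx != by:
            return ("x" if bx else "y"), sent
    return "tie", sent

print(millionaires_bits(0b1011, 0b1001, k=4))   # ('x', 6): 3 rounds leaked
```

On adversarial (nearly equal) inputs the whole input leaks, mirroring the worst-case/average-case gap the abstract highlights.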
Biomedical research is becoming increasingly data driven, analytical and hence digital. In recognition of this evolution NIH has established the Office for Data Science with trans-NIH responsibility for maximizing the value of this digital enterprise. This effort brings together communities, policy changes and new infrastructure to be applied to existing and new areas of research such as precision medicine. We will review these changes from the perspective of research advances that are underway and highlight how this community can further engage in these activities.
['Philip E. Bourne']
Big data in biomedicine — An NIH perspective
580,336
Our strategy for the TREC KBA CCR track is to first retrieve as many vital or useful documents as possible and then apply more sophisticated classification and ranking methods to differentiate vital from useful documents. We submitted 10 runs generated by 3 approaches: query expansion, classification and learning to rank. Query expansion is an unsupervised baseline, in which we combine entities’ names and their related entities’ names as phrase queries to retrieve relevant documents. This baseline outperforms the overall median and mean submissions. The system performance is further improved by supervised classification and learning to rank.
['Jingang Wang', 'Dandan Song', 'Lejian Liao', 'Chin-Yew Lin']
BIT and MSRA at TREC KBA CCR Track 2013
679,182
Spatial crowdsourcing (a.k.a. mobile crowdsourcing) is a new paradigm of data collection which has emerged in the last few years to enable workers to perform tasks in the physical world. The objective of spatial crowdsourcing is to outsource a set of location-specific tasks to a set of workers, where the workers are required to physically be at the task locations to complete them, e.g., taking pictures or collecting air quality information at specified locations of interest. In this paper, we discuss the unique challenges of spatial crowdsourcing: task assignment, incentive mechanisms, workers' location privacy and the absence of real-world datasets. Thereafter, we present our current approaches to those issues.
['Hien To']
Task assignment in spatial crowdsourcing: challenges and approaches
955,369
['Arun Nemani', 'Woojin Ahn', 'Denise W. Gee', 'Xavier Intes', 'Steven D. Schwaitzberg', 'Meryem A. Yücel', 'Suvranu De']
Objective Surgical Skill Differentiation for Physical and Virtual Surgical Trainers via Functional Near-Infrared Spectroscopy.
812,519
We describe improved mechanisms to accurately classify days when news for topics receive unexpectedly high amount of coverage. We further investigate the factors which influence this classification using ‘Presidential Elections’ as the topic of interest. This helps in bringing out useful trends and relations between days with hot topics by varying variables like history window size,van-ratio etc. We also propose a statistical scheme to approximate major events related to the topic. We then try to approximate the chain of events related to the major events. This can support a news alert service and also serve the purpose of automatically tracking news which follow up major events.
['Raghvendra Mall', 'Neeraj Bagdia', 'Vikram Pudi']
Variations and Trends in Hot Topics in News Feeds
558,452
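A generic sketch of the classification task described above: flag days whose article count exceeds a rolling-history mean by several standard deviations. The window parameter mirrors the paper's history window; the z-threshold is only a stand-in for its tuning knobs (e.g., the van-ratio), whose definition the abstract does not give.

```python
import numpy as np

def hot_days(counts, window=14, z_thresh=3.0):
    """Return indices of days whose count is a z_thresh-sigma outlier
    relative to the preceding `window` days."""
    counts = np.asarray(counts, dtype=float)
    flags = []
    for t in range(window, len(counts)):
        hist = counts[t - window:t]
        mu, sigma = hist.mean(), hist.std() + 1e-9   # avoid divide-by-zero
        if (counts[t] - mu) / sigma > z_thresh:
            flags.append(t)
    return flags

rng = np.random.default_rng(7)
daily = rng.poisson(20, 60)          # baseline coverage of a topic
daily[45] += 60                      # inject a burst of election coverage
print("hot days:", hot_days(daily))  # expect day 45 flagged
```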
In a geometric bottleneck shortest path problem, we are given a set S of n points in the plane, and want to answer queries of the following type: Given two points p and q of S and a real number L, compute (or approximate) a shortest path in the subgraph of the complete graph on S consisting of all edges whose length is less than or equal to L. We present efficient algorithms for answering several query problems of this type. Our solutions are based on minimum spanning trees, spanners, the Delaunay triangulation, and planar separators.
['Prosenjit Bose', 'Anil Maheshwari', 'Giri Narasimhan', 'Michiel H. M. Smid', 'Norbert Zeh']
Approximating geometric bottleneck shortest paths
963,202
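As a baseline for the query the entry above accelerates: filter the complete geometric graph to edges of length at most L and run Dijkstra. This is the naive O(n^2)-per-query approach that the paper's structures (minimum spanning trees, spanners, the Delaunay triangulation, separators) are designed to beat.

```python
import heapq
import math

def bottleneck_shortest_path(points, src, dst, L):
    """Dijkstra on the implicit complete graph, keeping only edges <= L."""
    n = len(points)
    dist = [math.inf] * n
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist[u]:
            continue                               # stale heap entry
        ux, uy = points[u]
        for v, (vx, vy) in enumerate(points):
            w = math.hypot(ux - vx, uy - vy)
            if 0 < w <= L and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return math.inf                                # no path with edges <= L

pts = [(0, 0), (1, 0), (2, 0), (4, 0)]
print(bottleneck_shortest_path(pts, 0, 3, L=1.0))  # inf: the gap of 2 blocks it
print(bottleneck_shortest_path(pts, 0, 3, L=2.0))  # 4.0 via the chain
```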
We describe the quadratic sieve factoring algorithm and a pipeline architecture on which it could be efficiently implemented. Such a device would be of moderate cost to build and would be able to factor 100-digit numbers in less than a month. This represents an order of magnitude speed-up over current implementations on supercomputers. Using a distributed network of many such devices, it is predicted that much larger numbers could be practically factored. 1. Introduction. The problem of efficiently factoring large composite numbers has been of interest for centuries. It shares with many other basic problems in the sciences the twin attributes of being easy to state, yet (so far) difficult to solve. In recent years, it has also become an applied science. In fact, several new public-key cryptosystems and signature schemes, including the RSA public-key cryptosystem (10), base their security on the supposed intractability of the factoring problem. Although there is no known polynomial time algorithm for factoring, we do have subexponential algorithms. Over the last few years there has developed a remarkable six-way tie for the asymptotically fastest factoring algorithms. These methods all have the common running time
['Carl Pomerance', 'J.W. Smith', 'Randy Tuler']
A pipeline architecture for factoring large integers with the quadratic sieve algorithm
442,019
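A toy congruence-of-squares factorization in the spirit of the quadratic sieve, for illustration only: it collects B-smooth values of x^2 − n near √n and brute-forces a subset with an all-even exponent vector instead of sieving and solving a linear system over GF(2), so it bears only a conceptual resemblance to the pipeline the paper describes.

```python
from itertools import combinations
from math import gcd, isqrt, prod

def smooth_factor(m, base):
    """Exponent vector of m over `base`, or None if m is not base-smooth."""
    exps = []
    for p in base:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        exps.append(e)
    return exps if m == 1 else None

def toy_qs(n, B=30, span=200):
    base = [p for p in range(2, B) if all(p % q for q in range(2, p))]
    rels = []                                    # (x, exponent vector of x^2-n)
    for x in range(isqrt(n) + 1, isqrt(n) + span):
        e = smooth_factor(x * x - n, base)
        if e:
            rels.append((x, e))
    for r in range(1, min(6, len(rels)) + 1):    # brute-force subset search
        for subset in combinations(rels, r):
            if all(sum(e) % 2 == 0 for e in zip(*(ev for _, ev in subset))):
                a = prod(x for x, _ in subset) % n       # a^2 = b^2 (mod n)
                b = isqrt(prod(x * x - n for x, _ in subset))
                f = gcd(a - b, n)
                if 1 < f < n:
                    return f, n // f
    return None

print(toy_qs(8051))    # expect (83, 97)
```

The real algorithm replaces trial division with sieving over an interval and the subset search with Gaussian elimination over GF(2) — exactly the stages the paper's pipeline architecture parallelizes.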
['David Escudero Mancebo', 'Eva Estebas-Vilaplana']
Visualizing tool for evaluating inter-label similarity in prosodic labeling experiments.
796,114
['Gergely Zachár', 'Gyula Simon']
Towards distortion-tolerant radio-interferometric object tracking
722,504
Although cognitive behavioral therapy (CBT) has been demonstrated to be the most effective approach for the treatment of bulimia nervosa (BN), there is a lack of studies showing whether a combination with a serious video game (SVG) might be useful to enhance patients' emotional regulation capacities and general outcome. The aims of this study were (a) to analyze whether outpatient CBT + SVG, when compared with outpatient CBT − SVG, shows a better short-term outcome; (b) to examine whether the CBT + SVG group is more effective in reducing emotional expression and levels of anxiety than CBT − SVG. Thirty-eight patients diagnosed as having BN according to DSM-5 criteria were consecutively assigned to two outpatient group therapy conditions (that lasted for 16 weekly sessions): 20 CBT + SVG versus 18 CBT − SVG. Patients were assessed before and after treatment using not only a food and binging/purging diary and clinical questionnaires in the field of eating disorders but also additional indexes for measu...
['Fernando Fernández-Aranda', 'Susana Jiménez-Murcia', 'Juan José Santamaría', 'Cristina Giner-Bartolomé', 'Gemma Mestre-Bach', 'Roser Granero', 'Isabel Sánchez', 'Zaida Agüera', 'Maher Ben Moussa', 'Nadia Magnenat-Thalmann', 'Dimitri Konstantas', 'Tony Lam', 'Mikkel Lucas', 'Jeppe Nielsen', 'Peter Lems', 'Salomé Tárrega', 'José M. Menchón']
The Use of Videogames as Complementary Therapeutic Tool for Cognitive Behavioral Therapy in Bulimia Nervosa Patients
551,495
This paper describes a new method for generating 2-D watermarks with good synchronizability and secrecy properties. The aim of the work is primarily to develop a watermarking scheme that efficiently and securely deals with cropping and object segmentation. It differs from other existing schemes by its innovative pattern generation algorithm. On the basis of a 1-D sequence, we propose an original method to generate a redundant 2-D pattern with cyclic properties depending on a secret key. The cyclic property facilitates synchronization for the detection of the watermark when the reference marks are lost. Possible application fields of this technique are object-oriented scene authentication as well as classical intellectual property rights issues.
['Damien Delannay', 'Benoît Macq']
Generalized 2-D cyclic patterns for secret watermark generation
291,255
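A sketch of the 1-D-to-2-D cyclic construction idea described above: a key-seeded pseudo-random row is tiled with per-row cyclic shifts, giving the pattern a shift-invariance a detector can exploit for re-synchronization after cropping. The specific shift rule and sizes are illustrative assumptions, not the paper's generator.

```python
import numpy as np

def cyclic_pattern(key: int, n: int = 8, shift: int = 3):
    rng = np.random.default_rng(key)            # the secret key seeds the PRNG
    base = rng.choice([-1, 1], size=n)          # 1-D bipolar base sequence
    # row i is the base sequence cyclically rotated by i*shift positions
    return np.stack([np.roll(base, i * shift) for i in range(n)])

pat = cyclic_pattern(key=42)
# invariance: moving up one row and left `shift` columns reproduces the
# pattern exactly, so a detector can re-lock onto a cropped copy by a
# small cyclic correlation search instead of relying on reference marks
print(np.array_equal(np.roll(pat, (-1, -3), axis=(0, 1)), pat))  # True
```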
Due to its numerous application fields and benefits, virtualization has become an interesting and attractive topic in computer and mobile systems, as it promises advantages for security and cost efficiency. However, it may bring additional performance overhead. Recently, CPU virtualization has become more popular for embedded platforms, where the performance overhead is especially critical. In this article, we present measurements of the performance overhead of the two hypervisors Xen and Jailhouse on ARM processors under the heavy-load “Cpuburn-a8” application and compare it to a native Linux system running on ARM processors.
['Sebouh Toumassian', 'Rico Werner', 'Axel Sikora']
Performance measurements for hypervisors on embedded ARM processors
932,649