abstract | authors | title | __index_level_0__ |
---|---|---|---|
Face recognition algorithms mainly differ in how they represent the probe face image using the training data. As a state-of-the-art face recognition algorithm, linear regression computes a reconstruction matrix from the images of each subject and then approximates the probe face image using that matrix. However, the performance of this linear algorithm is limited by the nonlinear structure of face images, which is caused by variations in illumination, expression, pose, and occlusion. To overcome this problem, in this paper we propose a kernel-based nonlinear regression algorithm for effective face recognition. Because of the high (even infinite) dimensionality of the nonlinear transformation function, it is infeasible to directly calculate the corresponding reconstruction matrix, and hence impossible to explicitly approximate the probe image. With the help of the kernel trick, we tackle this difficulty by embedding the nonlinear regression in the stage of computing the distance between the probe image and its approximation. The proposed nonlinear regression classification algorithm is evaluated on several popular standard databases under a number of classical evaluation protocols reported in the face recognition literature. A comparative study with the linear regression classification approach and several other algorithms shows the superiority of the proposed approach. | ['Lin He', 'Jing Pan'] | Face Recognition Using Nonlinear Regression | 120,800 |
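A minimal sketch of the kernel-regression classification idea described in the row above, assuming an RBF kernel and per-class training sets; in feature space the reconstruction residual reduces to k(y, y) − k_yᵀK⁻¹k_y, which avoids forming the infinite-dimensional reconstruction matrix. This illustrates the general technique, not necessarily the authors' exact formulation:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    # Pairwise RBF kernel between rows of A and rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_regression_classify(probe, class_images, gamma=1e-3, reg=1e-6):
    """Assign `probe` to the class whose kernel-space regression best
    reconstructs it. For each class with Gram matrix K and cross-kernel
    vector k_y, the squared feature-space residual of projecting phi(y)
    onto span(phi(X)) is k(y, y) - k_y^T K^{-1} k_y."""
    y = probe[None, :]
    best_class, best_resid = None, np.inf
    for label, X in class_images.items():                   # X: (n_i, d) images
        K = rbf_kernel(X, X, gamma) + reg * np.eye(len(X))  # regularized Gram
        k_y = rbf_kernel(X, y, gamma)                       # (n_i, 1)
        resid = rbf_kernel(y, y, gamma)[0, 0] - (k_y.T @ np.linalg.solve(K, k_y))[0, 0]
        if resid < best_resid:
            best_class, best_resid = label, resid
    return best_class
```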
Mobile devices like smartphones together with social networks enable people to generate, share, and consume enormous amounts of media content. Common search operations, for example searching for a music clip based on artist name and song title on video platforms such as YouTube, can be achieved both based on potentially shallow human-generated metadata and based on more profound content analysis, driven by Optical Character Recognition (OCR) or Automatic Speech Recognition (ASR). However, more advanced use cases, such as summaries or compilations of several pieces of media content covering a certain event, are hard, if not impossible, to fulfill at large scale. One example of such an event is a keynote speech held at a conference, where, given a stable network connection, media content is published on social networks while the event is still going on. In our thesis, we develop a framework for media content processing, leveraging social networks, utilizing the Web of Data and fine-grained media content addressing schemes like Media Fragments URIs to provide a scalable and sophisticated solution to realize the above use cases: media content summaries and compilations. We evaluate our approach on the entity level against social media platform APIs in conjunction with Linked (Open) Data sources, comparing the current manual approaches against our semi-automated approach. Our proposed framework can be used as an extension for existing video platforms. | ['Thomas Steiner'] | DC proposal: enriching unstructured media content about events to enable semi-automated summaries, compilations, and improved search by leveraging social networks | 568,749 |
Specialized open access digital collections contain a wealth of valuable resources. However, major academic and research libraries do not always provide access to them, and thus do not benefit from these unique resources. This case study of one such digital collection, Bioline International, surveys 76 academic libraries in Canada and the United States to determine how often libraries are linking to the collection. A follow-up questionnaire was sent to librarians at the surveyed institutions to determine their opinions about the use of open access journals. The findings suggest issues of poor adoption rates of open access journals, as well as some reasons why such journals may not be actively adopted. | ['Jen Sweezie', 'Nadia Caidi', 'Leslie Chan'] | The Inclusion of Open Access Journals in Academic Libraries: A Case Study of Bioline International | 260,865 |
In this paper we adopt a simulation approach to study the performance of the BitTorrent protocol in terms of the entropy that qualifies a torrent and the structure of the overlay used to distribute the content. We find that the entropy of a torrent, defined as the diversity that characterizes the distribution of pieces of the content, plays an important role for the system to achieve optimal performance. We then relate the performance of BitTorrent with the characteristics of the distribution overlay built by the peers taking part in the torrent. Our results show that the number of connections a given peer maintains with other peers and the fraction of those connections initiated by the peer itself are key factors to sustain a high entropy, hence an optimal system performance. Those results were obtained for a realistic choice of torrent sizes and system parameters, under the assumption of a flash-crowd peer arrival pattern. | ['Guillaume Urvoy-Keller', 'Pietro Michiardi'] | Impact of Inner Parameters and Overlay Structure on the Performance of BitTorrent | 399,120 |
We provide a comprehensive framework for semantic GSM artifacts, discuss in detail its properties, and present main software engineering architectures it is able to capture. The distinguishing aspect of our framework is that it allows for expressing both the data and the lifecycle schema of GSM artifacts in terms of an ontology, i.e., a shared and formalized conceptualization of the domain of interest. To guide the modeling of data and lifecycle we provide an upper ontology, which is specialized in each artifact with specific lifecycle elements, relations, and business objects. The framework thus obtained allows to achieve several advantages. On the one hand, it makes the specification of conditions on data and artifact status attribute fully declarative and enables semantic reasoning over them. On the other, it fosters the monitoring of artifacts and the interoperation and cooperation among different artifact systems. To fully achieve such an interoperation, we enrich our framework by enabling the linkage of the ontology to autonomous database systems through the use of mappings. We then discuss two scenarios of practical interest that show how mappings can be used in the presence of multiple systems. For one of these scenarios we also describe a concrete instantiation of the framework and its application to a real-world use case in the energy domain, investigated in the context of the EU project ACSI. | ['Riccardo De Masellis', 'Domenico Lembo', 'Marco Montali', 'Dmitry Solomakhin'] | Semantic Enrichment of GSM-Based Artifact-Centric Models | 1,089 |
During the past few decades, remarkable progress has been made in solving pattern recognition problems using networks of spiking neurons. However, the issue of pattern recognition involving the computational process from sensory encoding to synaptic learning remains underexplored, as most existing models or algorithms target only part of this process. Furthermore, many learning algorithms proposed in the literature neglect or pay little attention to sensory information encoding, which makes them incompatible with neural-realistic sensory signals encoded from real-world stimuli. By treating sensory coding and learning as a systematic process, we attempt to build an integrated model based on spiking neural networks (SNNs), which performs sensory neural encoding and supervised learning with precisely timed sequences of spikes. With emerging evidence of precise spike-timing neural activities, the view that information is represented by explicit firing times of action potentials rather than mean firing rates has been receiving increasing attention. The external sensory stimulation is first converted into spatiotemporal patterns using a latency-phase encoding method and subsequently transmitted to the consecutive network for learning. Spiking neurons are trained to reproduce target signals encoded with precisely timed spikes. We show that when supervised spike-timing-based learning is used, different spatiotemporal patterns are recognized by different spike patterns with high time precision in milliseconds. | ['Jun Hu', 'Huajin Tang', 'Kay Chen Tan', 'Haizhou Li', 'Luping Shi'] | A spike-timing-based integrated model for pattern recognition | 407,975 |
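A toy sketch of the latency-coding ingredient of the latency-phase scheme mentioned above, mapping stimulus intensity to precise spike times (stronger inputs fire earlier); the logarithmic mapping, time constant, and encoding window are illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

def latency_encode(intensities, t_window=10.0, tau=2.0):
    """Map normalized stimulus intensities in [0, 1] to spike times (ms):
    stronger stimuli spike earlier (t = -tau * ln(I)), capped at t_window.
    Returns one spike time per input channel (np.inf = no spike)."""
    intensities = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    with np.errstate(divide="ignore"):
        t = -tau * np.log(intensities)          # I=1 -> t=0; I->0 -> t->inf
    return np.where(t <= t_window, t, np.inf)   # drop spikes outside the window

print(latency_encode([1.0, 0.5, 0.1, 0.0]))     # [0.0, 1.386..., 4.605..., inf]
```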
In this paper we report on the evaluation of volumetric shape reconstruction methods that consider as input implicit forms in 3D. Many visual applications build implicit representations of shapes that are converted into explicit shape representations using geometric tools such as the Marching Cubes algorithm. This is the case with image based reconstructions that produce point clouds from which implicit functions are computed, with for instance a Poisson reconstruction approach. While the Marching Cubes method is a versatile solution with proven efficiency, alternative solutions exist with different and complementary properties that are of interest for shape modeling. In this paper, we propose a novel strategy that builds on Centroidal Voronoi Tessellations (CVTs). These tessellations provide volumetric and surface representations with strong regularities in addition to provably more accurate approximations of the implicit forms considered. In order to compare the existing strategies, we present an extensive evaluation that analyzes various properties of the main strategies for implicit to explicit volumetric conversions: Marching cubes, Delaunay refinement and CVTs, including accuracy and shape quality of the resulting shape mesh. | ['Li Wang', 'Franck Hétroy-Wheeler', 'Edmond Boyer'] | On Volumetric Shape Reconstruction from Implicit Forms | 887,689 |
We describe here a method for building a support vector machine (SVM) with integer parameters. Our method is based on a branch-and-bound procedure, derived from modern mixed integer quadratic programming solvers, and is useful for implementing the feed-forward phase of the SVM in fixed-point arithmetic. This allows the implementation of the SVM algorithm on resource-limited hardware like, for example, computing devices used for building sensor networks, where floating-point units are rarely available. The experimental results on well-known benchmarking data sets and a real-world people-detection application show the effectiveness of our approach. | ['Davide Anguita', 'Alessandro Ghio', 'Stefano Pischiutta', 'Sandro Ridella'] | A support vector machine with integer parameters | 151,677 |
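Since the paper's goal is enabling the feed-forward SVM phase on fixed-point hardware, here is a sketch of an integer-only decision function; the linear kernel and the 2^8 quantization scale are assumptions for illustration, and the branch-and-bound training itself is not shown:

```python
import numpy as np

def integer_svm_decision(x_fixed, sv_fixed, alpha_int, bias_int):
    """Feed-forward SVM phase using only integer arithmetic.
    x_fixed, sv_fixed: input and support vectors pre-quantized to ints
    alpha_int: integer (alpha_i * y_i) coefficients from integer training
    bias_int: integer bias on the same fixed-point scale."""
    kernels = sv_fixed @ x_fixed                 # integer linear kernel <x_i, x>
    score = int(np.dot(alpha_int, kernels)) + bias_int
    return 1 if score >= 0 else -1

# Example with an illustrative 2^8 fixed-point scale:
scale = 256
x   = np.round(np.array([0.5, -1.2]) * scale).astype(np.int64)
svs = np.round(np.array([[1.0, 0.1], [-0.3, 0.9]]) * scale).astype(np.int64)
print(integer_svm_decision(x, svs, alpha_int=np.array([3, -2]), bias_int=10))
```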
As distributed systems become more complex, understanding the underlying algorithms that make these systems work becomes even harder. Traditional learning modalities based on didactic teaching and theoretical proofs alone are no longer sufficient for a holistic understanding of these algorithms. Instead, an environment that promotes an immersive, hands-on learning of distributed system algorithms is needed to complement existing teaching modalities. Such an environment must be flexible to support learning of a variety of algorithms. Moreover, since many of these algorithms share several common traits with each other while differing only in some aspects, the environment should support extensibility and reuse. Finally, it must also allow students to experiment with large-scale deployments in a variety of operating environments. To address these concerns, we use the principles of software product lines (SPLs) and model-driven engineering and adopt the cloud platform to design an immersive learning environment called the Playground of Algorithms for Distributed Systems (PADS). The research contributions in PADS include the underlying feature model, the design of a domain-specific modeling language that supports the feature model, and the generative capabilities that maximally automate the synthesis of experiments on cloud platforms. A prototype implementation of PADS is described to showcase a peer-to-peer file transfer algorithm based on BitTorrent, which demonstrates the benefits of rapid deployment of distributed systems algorithms. | ['Yogesh D. Barve', 'Prithviraj Patil', 'Aniruddha Gokhale'] | A Cloud-Based Immersive Learning Environment for Distributed Systems Algorithms | 868,259 |
Monadic query languages over trees currently receive considerable interest in the database community, as the problem of selecting nodes from a tree is the most basic and widespread database query problem in the context of XML. Partly a survey of recent work done by the authors and their group on logical query languages for this problem and their expressiveness, this paper provides a number of new results related to the complexity of such languages over so-called axis relations (such as "child" or "descendant") which are motivated by their presence in the XPath standard or by their utility for data extraction (wrapping). | ['Georg Gottlob', 'Christoph Koch'] | Monadic queries over tree-structured data | 42,883 |
The incorporation of learning into commercial games can enrich the player experience, but may concern developers in terms of issues such as losing control of their game world. We explore a number of applied research efforts and some fielded applications that point to the tremendous possibilities of machine learning research, including game genres such as real-time strategy games, flight simulation games, car and motorcycle racing games, board games such as Go, and even traditional game-theoretic problems such as the prisoner's dilemma. A common trait of these works is the potential of machine learning to reduce the burden of game developers. However, a number of challenges exist that hinder the broader use of machine learning. We discuss some of these challenges while at the same time exploring opportunities for a wider use of machine learning in games. | ['Héctor Muñoz-Avila', 'Christian Bauckhage', 'Michal Bída', 'Clare Bates Congdon', 'Graham Kendall'] | Learning and Game AI | 600,804 |
The coded error probability for direct sequence (DS) BPSK or QPSK spread-spectrum systems, used with or without interleaving, and operating in the presence of pulsed multiple-tone interference, is investigated. We consider the worst-case channel error probability under conditions of pulsed multiple-tone jamming for the various systems, and present coded error probability as a function of interleaving delay for (n, k) block codes with hard-decision decoding. It is shown that when the maximum level of continuous multiple-tone interference does not exceed the level of the desired signal, the duty cycle of the corresponding pulsed interference which yields the worst-case error probability is usually small, and that the smaller the duty factor of the jamming, the more considerable is the performance improvement due to the use of interleaving in conjunction with the coding. | ['Rui-Hua Dou', 'Laurence B. Milstein'] | Coded Error Probability for DS Spread-Spectrum Systems with Periodic Pulsed Multiple-Tone Interference | 310,744 |
Wood density (ρ) is an indicator of Douglas-fir (Pseudotsuga menziesii) forest product performance and past and present tree ecophysiology. Models describing the spatial variation in this wood property will require a considerable sampling effort. Medical X-ray computed tomography (CT) has been identified as one technology for rapidly estimating the ρ of Douglas-fir wood. The density of Douglas-fir can be predicted from CT Hounsfield units through a linear relationship (R²: 96%). The moisture content of wood samples has an additional linear effect on estimating Douglas-fir wood density (0.0015 g/cm³) and also has a practically minor (2.8E-06 g/cm³), but significant, interactive relationship with CT Hounsfield units. While the effect of moisture content explains only a small percentage of the variance in ρ, accounting for this effect may be important to avoid prediction biases. Finally, X-ray tube current (mA) may also impose a small effect (0.00003 g/cm³) on estimating wood density. In contrast to other factors, the filtered back-projection algorithm used to produce CT scanning images does not have a strong effect on estimating ρ. While it is important to account for scanner settings and moisture content, 74% of the variance in predicting ρ can be explained by CT Hounsfield units, with 21% explained by accounting for moisture content and X-ray tube current. Independent estimates of wood sample volume for validation can be achieved in several ways, each with possible systematic biases. This experiment found that the volume of wood samples conditioned to different moisture contents could be estimated similarly using volumetric displacement or dimension measurement by caliper. The absolute mean deviance of estimated sample volume from caliper measurement relative to volumetric displacement was 0.45 cm³ or 2.6%. CT scanning can be used to rapidly estimate the density of Douglas-fir at a resolution of 1 mm using unprepared samples. | ['Nathaniel Lee Osborne', 'Olav Høibø', 'Douglas A. Maguire'] | Estimating the density of coast Douglas-fir wood samples at different moisture contents using medical X-ray computed tomography | 809,184 |
Aging is associated with declines in cognitive performance and multiple changes in the brain, including reduced default mode functional connectivity (FC). However, conflicting results have been reported regarding age differences in FC between hippocampal and default mode regions. This discrepancy may stem from the variation in selection of hippocampal regions. We therefore examined the effect of age on resting state FC of anterior and posterior hippocampal regions in an adult life-span sample. Advanced age was associated with lower FC between the posterior hippocampus and three regions: the posterior cingulate cortex, medial prefrontal cortex, and lateral parietal cortex. In addition, age-related reductions of FC between the left and right posterior hippocampus, and bilaterally along the posterior to anterior hippocampal axis were noted. Age differences in medial prefrontal and inter-hemispheric FC significantly differed between anterior and posterior hippocampus. Older age was associated with lower performance in all cognitive domains, but we observed no associations between FC and cognitive performance after controlling for age. We observed a significant effect of gender and a linear effect of COMT val158met polymorphism on hippocampal FC. Females showed higher FC of anterior and posterior hippocampus and medial prefrontal cortex than males, and the dose of val allele was associated with lower posterior hippocampus – posterior cingulate FC, independent of age. Vascular and metabolic factors showed no significant effects on FC. These results suggest differential age-related reduction in the posterior hippocampal FC compared to the anterior hippocampus, and an age-independent effect of gender and COMT on hippocampal FC. | ['Jessica S. Damoiseaux', 'Raymond Viviano', 'P. Yuan', 'Naftali Raz'] | Differential effect of age on posterior and anterior hippocampal functional connectivity. | 698,470 |
The dominant paradigm for programs playing the game of Go is Monte Carlo tree search. This algorithm builds a search tree by playing many simulated games (playouts). Each playout consists of a sequence of moves within the tree followed by many moves beyond the tree. Moves beyond the tree are generated by a biased random sampling policy. The recently published last-good-reply policy makes moves that, in previous playouts, have been successful replies to immediately preceding moves. This paper presents a modification of this policy that not only remembers moves that recently succeeded but also immediately forgets moves that recently failed. This modification provides a large improvement in playing strength. We also show that responding to the previous two moves is superior to responding to the previous one move. Surprisingly, remembering the win rate of every reply performs much worse than simply remembering the last good reply (and indeed worse than not storing good replies at all). | ['Hendrik Baier', 'Peter Drake'] | The Power of Forgetting: Improving the Last-Good-Reply Policy in Monte Carlo Go | 165,029 |
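A compact sketch of the last-good-reply-with-forgetting bookkeeping described above (the variant keyed on one preceding move; the paper also studies keying on the previous two moves): store a reply when the replying player wins the playout, and immediately delete a stored reply that was tried in a losing playout. Move and player types are left abstract:

```python
def update_replies(reply, playout_moves, winner):
    """Reply-table update after one playout.
    reply[player][prev_move] -> stored reply move
    playout_moves: list of (player, move) in order of play
    winner: the player who won this playout."""
    for i in range(1, len(playout_moves)):
        prev_move = playout_moves[i - 1][1]
        player, move = playout_moves[i]
        table = reply.setdefault(player, {})
        if player == winner:
            table[prev_move] = move              # remember the good reply
        elif table.get(prev_move) == move:
            del table[prev_move]                 # forget the reply that just failed

def policy_move(reply, player, prev_move, fallback):
    """Play the stored reply if one exists, else fall back to the
    default biased-random playout policy."""
    move = reply.get(player, {}).get(prev_move)
    return move if move is not None else fallback()
```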
This work presents a method to model a spherical mesh by modifying its heightmap in an augmented reality environment. Our contribution is the use of the hierarchical structure of semiregular A4-8 meshes to represent a dynamic deformable mesh suitable for modeling. It defines only a fraction of the overall terrain that is subjected to local deformations. The modeling of spherical terrains is achieved with proper subdivision constraints at the singularities of the parametric space. An error metric dependent on the observer and on the geometry of the topography was used to provide fast visualization and editing. The results demonstrate that the use of the A4-8 mesh combined with the tangible augmented reality system is flexible to shape spherical terrains and can be easily modified to deal with other topologies, such as the torus and the cylinder. | ['Renan Dembogurski', 'Bruno José Dembogurski', 'Rodrigo Luis de Souza da Silva', 'Marcelo Bernardes Vieira'] | Interactive mesh generation with local deformations in multiresolution | 12,448 |
We present a system for pose and illumination invariant face recognition that combines two recent advances in the computer vision field: 3D morphable models and component-based recognition. A 3D morphable model is used to compute 3D face models from three input images of each subject in the training database. The 3D models are rendered under varying pose and illumination conditions to build a large set of synthetic images. These images are then used for training a component-based face recognition system. The face recognition module is preceded by a fast hierarchical face detector resulting in a system that can detect and identify faces in video images at about 4 Hz. The system achieved a recognition rate of 88% on a database of 2000 real images of ten people, which is significantly better than a comparable global face recognition system. The results clearly show the potential of the combination of morphable models and component-based recognition towards pose and illumination invariant face recognition. | ['Benjamin Weyrauch', 'Bernd Heisele', 'Jennifer Huang', 'Volker Blanz'] | Component-Based Face Recognition with 3D Morphable Models | 458,648 |
In a seminal paper Phan Minh Dung (Artif. Intell. 77(2), 321---357, 1995) developed the theory of abstract argumentation frameworks (AFs), which has remained a pivotal point of reference for research in AI and argumentation ever since. This paper assesses the merits of Dung's theory from an epistemological point of view. It argues that, despite its prominence in AI, the theory of AFs is epistemologically flawed. More specifically, abstract AFs don't provide a normatively adequate model for the evaluation of rational, multi-proponent controversy. Different interpretations of Dung's theory may be distinguished. Dung's intended interpretation collides with basic principles of rational judgement suspension. The currently prevailing knowledge base interpretation ignores relevant arguments when assessing proponent positions in a debate. It is finally suggested that abstract AFs be better understood as a paraconsistent logic, rather than a theory of real argumentation. | ['Gregor Betz'] | Assessing the epistemological relevance of Dung-style argumentation theories | 586,741 |
| ['Heinrich Wansing'] | Formulas-as-types for a Hierarchy of Sublogics of Intuitionistic Propositional Logic. | 973,393 |
This paper presents the effects of electromagnetic interference from the railway environment on the performance of locomotive onboard Global Positioning System (GPS) receivers. The evaluation of the maximum tolerated interference around a train is presented, taking into account standards from the European Committee for Electrotechnical Standardization and the International Electrotechnical Commission, Mobile Radio for Railway Networks in Europe, and experimental results. These interference levels are used to study the performance of a hardware GPS receiver operating in different modes and considering the possible levels of the GPS L1 signal at the earth's surface. From the obtained results, it is possible to conclude that a low-cost GPS receiver is reliable for train positioning even if the train equipment has been designed at the threshold of the current standards. | ['Eduard Bertran', 'José Antonio Delgado-Penín'] | On the use of GPS receivers in railway environments | 406,271 |
| ['Simone Teufel', 'Hans van Halteren'] | Agreement in Human Factoid Annotation for Summarization Evaluation. | 746,663 |
We present Metropolis Photon Sampling (MPS), a visual importance-driven algorithm for populating photon maps. Photon Mapping and other particle tracing algorithms fail if the photons are poorly distributed. Our approach samples light transport paths that join a light to the eye, which accounts for the viewer in the sampling process and provides information to improve photon storage. Paths are sampled with a Metropolis-Hastings algorithm that exploits coherence among important light paths. We also present a technique for including user selected paths in the sampling process without introducing bias. This allows a user to provide hints about important paths or reduce variance in specific parts of the image. We demonstrate MPS with a range of scenes and show quantitative improvements in error over standard Photon Mapping and Metropolis Light Transport. | ['Shaohua Fan', 'Stephen Chenney', 'Yu-Chi Lai'] | Metropolis photon sampling with optional user guidance | 367,480 |
Real-time control of holonic manufacturing systems requires a radically different approach from that of traditional unit-level regulatory control. Because they need to automatically adapt and reconfigure based on the ever-changing requirements of the manufacturing system, control systems based on this approach are termed metamorphic control systems. The engineering of such software-centric metamorphic control systems for dynamically reconfigurable, distributed, multi-sensor-based holonic systems is addressed. An integrated and uniform event-driven control architecture is specified for the various functional levels of a metamorphic control system. This architecture utilizes the emerging International Electrotechnical Commission function block standard (IEC 1499) for industrial process measurement and control systems to specify the requisite behavior of distributed control software components (agents). | ['Sivaram Balasubramanian', 'Robert W. Brennan', 'Douglas H. Norrie'] | Requirements for holonic manufacturing systems control | 146,207 |
This paper describes the development and empirical testing of an intelligent tutoring system (ITS) with two emerging methodologies: (1) a partially observable Markov decision process (POMDP) for representing the learner model and (2) inquiry modeling, which informs the learner model with questions learners ask during instruction. POMDPs have been successfully applied to non-ITS domains but, until recently, have seemed intractable for large-scale intelligent tutoring challenges. New, ITS-specific representations leverage common regularities in intelligent tutoring to make a POMDP practical as a learner model. Inquiry modeling is a novel paradigm for informing learner models by observing rich features of learners' help requests such as categorical content, context, and timing. The experiment described in this paper demonstrates that inquiry modeling and planning with POMDPs can yield significant and substantive learning improvements in a realistic, scenario-based training task. | ['Jeremiah T. Folsom-Kovarik', 'Gita Sukthankar', 'Sae Schatz'] | Integrating learner help requests using a POMDP in an adaptive training system | 561,781 |
Modern conflict-driven clause-learning SAT solvers routinely solve large real-world instances with millions of clauses and variables in them. Their success crucially depends on effective branching heuristics. In this paper, we propose a new branching heuristic inspired by the exponential recency weighted average algorithm used to solve the bandit problem. The branching heuristic, which we call CHB, learns online which variables to branch on by leveraging the feedback received from conflict analysis. We evaluated CHB on 1200 instances from the 2013 and 2014 SAT Competitions, and showed that CHB solves significantly more instances than VSIDS, currently the most effective branching heuristic in widespread use. More precisely, we implemented CHB as part of the MiniSat and Glucose solvers, and performed an apples-to-apples comparison with their VSIDS-based variants. CHB-based MiniSat (resp. CHB-based Glucose) solved approximately 16.1% (resp. 5.6%) more instances than their VSIDS-based variants. Additionally, CHB-based solvers are much more efficient at constructing first preimage attacks on step-reduced SHA-1 and MD5 cryptographic hash functions than their VSIDS-based counterparts. To the best of our knowledge, CHB is the first branching heuristic to solve significantly more instances than VSIDS on a large, diverse benchmark of real-world instances. | ['Jia Hui Liang', 'Vijay Ganesh', 'Pascal Poupart', 'Krzysztof Czarnecki'] | Exponential recency weighted average branching heuristic for SAT solvers | 962,103 |
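A sketch of the exponential recency weighted average update at the core of CHB, following the paper's description (a step size decaying from 0.4 toward a floor of 0.06, and a reward inversely proportional to how recently a variable participated in a conflict); the details of integrating this with a real CDCL solver are omitted:

```python
class CHB:
    """Conflict History-Based branching scores (ERWA bandit update)."""
    def __init__(self, num_vars):
        self.q = [0.0] * (num_vars + 1)            # activity score per variable
        self.last_conflict = [0] * (num_vars + 1)  # index of last conflict participation
        self.alpha = 0.4                           # step size, anneals to 0.06
        self.num_conflicts = 0

    def on_conflict(self, vars_in_conflict, vars_played):
        """Called once per conflict; vars_played are variables branched on
        or propagated since the previous conflict."""
        self.num_conflicts += 1
        for v in vars_in_conflict:
            self.last_conflict[v] = self.num_conflicts
        if self.alpha > 0.06:
            self.alpha -= 1e-6                     # slow decay toward the floor
        for v in vars_played:
            multiplier = 1.0 if v in vars_in_conflict else 0.9
            reward = multiplier / (self.num_conflicts - self.last_conflict[v] + 1)
            self.q[v] = (1 - self.alpha) * self.q[v] + self.alpha * reward

    def pick_branch_var(self, unassigned):
        return max(unassigned, key=lambda v: self.q[v])
```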
In this paper, congestion dynamics along crowded freeway corridors are modeled as a conservation law with a source term that is continuous in space. The source term represents the net inflow from ramps, postulated here as a location-dependent function of the demand for entering and exiting the corridor. Demands are assumed to be time-independent, which is appropriate for understanding the onset of congestion. Numerical and analytical results reveal the existence of four well-defined regions in time-space, two of which are transient. The conditions for the existence of congestion both in the freeway and in the on-ramps are identified, as well as the set of on-ramps that are most likely to become active bottlenecks. The results in this paper help explain the stochastic nature of bottleneck activation, and can be applied to devise effective system-wide ramp metering strategies that would prevent excessively long on-ramp queues. | ['Jorge A. Laval', 'Ludovic Leclercq'] | Continuum Approximation for Congestion Dynamics Along Freeway Corridors | 522,950 |
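A minimal numerical sketch of the corridor model described above, a conservation law ρ_t + f(ρ)_x = s(x) with a space-continuous source term, discretized with a Godunov scheme; the Greenshields fundamental diagram, the ramp-inflow profile s(x), and all parameter values are illustrative assumptions:

```python
import numpy as np

vf, rho_j = 30.0, 0.2                      # free-flow speed, jam density (assumed)
rho_c = rho_j / 2.0                        # critical density (Greenshields)
def f(r): return vf * r * (1.0 - r / rho_j)           # fundamental diagram
def demand(r): return f(r) if r < rho_c else f(rho_c)
def supply(r): return f(rho_c) if r < rho_c else f(r)
def godunov_flux(rl, rr): return min(demand(rl), supply(rr))

def step(rho, s, dx, dt):
    """Conservative update rho_i += dt/dx * (F_{i-1/2} - F_{i+1/2}) + dt * s_i;
    boundary cells are held fixed as crude boundary conditions."""
    F = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(len(rho) - 1)])
    new = rho.copy()
    new[1:-1] += dt / dx * (F[:-1] - F[1:]) + dt * s[1:-1]
    return np.clip(new, 0.0, rho_j)

x = np.linspace(0.0, 10.0, 101); dx = x[1] - x[0]
s = 0.002 * np.exp(-(x - 5.0) ** 2)        # smooth net ramp inflow profile s(x)
rho = np.full_like(x, 0.05)                # uncongested initial density
for _ in range(200):
    rho = step(rho, s, dx, dt=0.9 * dx / vf)   # CFL-limited time step
```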
Multiple motion model (MMM) filters are a well-known approach for addressing rapidly maneuvering, noncooperative targets. Jump-Markov models provide the most well-known theoretical foundation for MMM filters. This paper addresses the problem of how to correctly generalize jump-Markov models to multitarget systems. Given this generalization, the jump-Markov version of the multisensor-multitarget Bayes filter is introduced. Then CPHD filter and PHD filter approximations of the jump-Markov multitarget Bayes filter are derived and compared with previous approaches. | ['Ronald P. S. Mahler'] | On multitarget jump-Markov filters | 113,282 |
Caching data by maintaining materialized views typically requires updating the cache appropriately to reflect dynamic source updates. Extensive research has addressed the problem of incremental view maintenance for relational data but only few works have addressed it for semi-structured data. In this paper we address the problem of incremental maintenance of views defined over XML documents using path-expressions. The approach described in this paper has the following main features that distinguish it from the previous works: (1) The view specification language is powerful and standardized enough to be used in realistic applications. (2) The size of the auxiliary data maintained with the views depends on the expression size and the answer size regardless of the source data size.(3) No source schema is assumed to exist; the source data can be any general well-formed XML document. Experimental evaluation is conducted to assess the performance benefits of the proposed approach. | ['Arsany Sawires', 'Junichi Tatemura', 'Oliver Po', 'Divyakant Agrawal', 'K. Selçuk Candan'] | Incremental maintenance of path-expression views | 307,231 |
We outline the main issues when designing interactive multimedia systems for children and propose a unified approach of acoustic, linguistic, and dialog modeling to system development. The acoustic, linguistic and dialog data collected in a Wizard of Oz experiment from 160 children ages 8-14 playing an interactive computer game are analyzed and children-specific modeling issues are presented. Age-dependent and modality-dependent dialog flow patterns are identified. Furthermore, extraneous speech patterns, linguistic variability and disfluencies are investigated in spontaneous children's speech, and important new results are reported. Finally, baseline automatic speech recognition results are presented for various tasks using simple acoustic and language models. | ['Alexandros Potamianos', 'Shrikanth Narayanan'] | Spoken dialog systems for children | 455,742 |
The Large Hadron Collider (LHC) running at CERN will soon be upgraded to increase its luminosity, giving rise to radiation reaching the level of a GigaRad Total Ionizing Dose (TID). This paper investigates the impact of such high radiation on transistors fabricated in a commercial 28 nm bulk CMOS process, with the perspective of using it for future silicon-based detectors. The DC electrical behavior of nMOSFETs is studied up to 1 Grad TID. All tested devices are shown to withstand that dose without any radiation-hard layout techniques. In spite of that, they experience a significant drain leakage current increase, which may affect normal device operation. In addition, a moderate threshold voltage shift and subthreshold slope degradation are observed. These phenomena have been linked to radiation-induced effects like interface and switching oxide traps, together with parasitic side-wall transistors. | ['Alessandro Pezzotta', 'Chun-Min Zhang', 'Farzan Jazaeri', 'Claudio Bruschini', 'Giulio Borghello', 'F. Faccio', 'S. Mattiazzo', 'A. Baschirotto', 'Christian C. Enz'] | Impact of GigaRad Ionizing Dose on 28 nm bulk MOSFETs for future HL-LHC | 846,682 |
For the purpose of uncertainty quantification with collocation, a method is proposed for generating families of one-dimensional nested quadrature rules with positive weights and symmetric nodes. This is achieved through a reduction procedure: we start with a high-degree quadrature rule with positive weights and remove nodes while preserving symmetry and positivity. This is shown to be always possible, by a lemma depending primarily on Caratheodory's theorem. The resulting one-dimensional rules can be used within a Smolyak procedure to produce sparse multi-dimensional rules, but weight positivity is lost then. As a remedy, the reduction procedure is directly applied to multi-dimensional tensor-product cubature rules. This allows to produce a family of sparse cubature rules with positive weights, competitive with Smolyak rules. Finally the positivity constraint is relaxed to allow more flexibility in the removal of nodes. This gives a second family of sparse cubature rules, in which iteratively as many nodes as possible are removed. The new quadrature and cubature rules are applied to test problems from mathematics and fluid dynamics. Their performance is compared with that of the tensor-product and standard Clenshaw–Curtis Smolyak cubature rule. | ['L. M. M. van den Bos', 'Barry Koren', 'Richard P. Dwight'] | Non-intrusive uncertainty quantification using reduced cubature rules | 961,714 |
| ['Richard M. Zahoransky', 'Saher Semaan', 'Klaus Rechert'] | Identity and Access Management for Complex Research Data Workflows. | 787,245 |
PROtein Domain Organization and Comparison (PRODOC) comprises several programs that enable convenient comparison of proteins as a sequence of domains. The in-built dataset currently consists of 698 000 proteins from 192 organisms with complete genomic data, and all the SWISSPROT proteins obtained from the Pfam database. All the entries in PRODOC are represented as a sequence of functional domains, assigned using hidden Markov models, instead of as a sequence of amino acids. On average 69% of the proteins in the proteomes and 49% of the residues are covered by functional domain assignments. Software tools allow the user to query the dataset with a sequence of domains and identify proteins with the same or a jumbled or circularly permuted arrangement of domains. As it is proposed that proteins with jumbled or the same domain sequences have similar functions, this search tool is useful in assigning the overall function of a multi-domain protein. Unique features of PRODOC include the generation of alignments between multi-domain proteins on the basis of the sequence of domains and in-built information on distantly related domain families forming superfamilies. It is also possible using PRODOC to identify domain sharing and gene fusion events across organisms. An exhaustive genome–genome comparison tool in PRODOC also enables the detection of successive domain sharing and domain fusion events across two organisms. The tool permits the identification of gene clusters involved in similar biological processes in two closely related organisms. The URL for PRODOC is http://hodgkin.mbu.iisc.ernet.in/~prodoc. | ['Oruganty Krishnadev', 'Nambudiry Rekha', 'Shashi B. Pandit', 'Saraswathi Abhiman', 'Smita Mohanty', 'Lakshmipuram S. Swapna', 'S. Gore', 'Narayanaswamy Srinivasan'] | PRODOC : a resource for the comparison of tethered protein domain architectures with in-built information on remotely related domain families | 217,665 |
We consider the sequential portfolio investment problem. Building on results in signal processing, machine learning, and other areas, we use factor graphs to develop new universal portfolio algorithms for switching strategies under transaction costs. These algorithms make use of a transition diagram in order to compactly represent and compute message passing on an exponentially increasing number of factor graphs. We compare this with a previous universal switching portfolio algorithm, demonstrating typically superior performance. | ['Andrew J. Bean', 'Andrew C. Singer'] | Factor graph switching portfolios under transaction costs | 526,804 |
We show that the decision function of a radial basis function (RBF) classifier is equivalent in form to the Bayes-optimal discriminant associated with a special kind of mixture-based statistical model. The relevant mixture model is a type of mixture-of-experts model for which class labels, like continuous-valued features, are assumed to have been generated randomly, conditional on the mixture component of origin. The new interpretation shows that RBF classifiers effectively assume a probability model, which, moreover, is easily determined given the designed RBF. This interpretation also suggests a statistical learning objective as an alternative to standard methods for designing the RBF-equivalent models. The statistical objective is especially useful for incorporating unlabeled data to enhance learning. Finally, it is observed that any new data to classify are simply additional unlabeled data. Thus, we suggest a combined learning and use paradigm, to be invoked whenever there are new data to classify. | ['David J. Miller', 'Hasan S. Uyar'] | Combined learning and use for a mixture model equivalent to the RBF classifier | 172,223 |
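A sketch of the statistical model the paper associates with an RBF classifier: a mixture whose components generate both features (Gaussian) and class labels (categorical β_{c|j}), so the Bayes-optimal class posterior is P(c|x) ∝ Σ_j π_j N(x; μ_j, σ_j) β_{c|j}. The isotropic covariances and all numbers are illustrative assumptions:

```python
import numpy as np

def class_posteriors(x, pis, mus, sigmas, betas):
    """P(c | x) for a mixture model whose components generate both
    features (Gaussian) and labels (categorical betas[j, c])."""
    x = np.asarray(x, dtype=float)
    d = x.shape[0]
    comp = np.array([
        pi * np.exp(-np.sum((x - mu) ** 2) / (2 * s ** 2))
           / (2 * np.pi * s ** 2) ** (d / 2)
        for pi, mu, s in zip(pis, mus, sigmas)
    ])                                   # pi_j * N(x; mu_j, sigma_j^2 I)
    joint = comp @ betas                 # sum_j comp_j * betas[j, c], per class c
    return joint / joint.sum()           # normalize to class posteriors

# Two components, two classes (all numbers illustrative):
pis = [0.5, 0.5]; mus = [np.zeros(2), np.ones(2)]; sigmas = [1.0, 1.0]
betas = np.array([[0.9, 0.1], [0.2, 0.8]])
print(class_posteriors([0.2, 0.1], pis, mus, sigmas, betas))
```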
| ['Miroslav Uhrina', 'Miroslav Voznak', 'Martin Vaculík', 'Michal Malicek'] | Procedure for Mapping Objective Video Quality Metrics to the Subjective MOS Scale for Full HD Resolution of H.265 Compression Standard | 641,855 |
As databases get widely deployed, it becomes increasingly important to reduce the overhead of database administration. An important aspect of data administration that critically influences performance is the ability to select indexes for a database. In order to decide the right indexes for a database, it is crucial for the database administrator (DBA) to be able to perform a quantitative analysis of the existing indexes. Furthermore, the DBA should have the ability to propose hypothetical (“what-if”) indexes and quantitatively analyze their impact on performance of the system. Such impact analysis may consist of analyzing workloads over the database, estimating changes in the cost of a workload, and studying index usage while taking into account projected changes in the sizes of the database tables. In this paper we describe a novel index analysis utility that we have prototyped for Microsoft SQL Server 7.0. We describe the interfaces exposed by this utility that can be leveraged by a variety of front-end tools and sketch important aspects of the user interfaces enabled by the utility. We also discuss the implementation techniques for efficiently supporting “what-if” indexes. Our framework can be extended to incorporate analysis of other aspects of physical database design. | ['Surajit Chaudhuri', 'Vivek R. Narasayya'] | AutoAdmin “what-if” index analysis utility | 304,654 |
| ['Jérémie Cabessa', 'Jacques Duparc'] | Expressive Power of Nondeterministic Recurrent Neural Networks in Terms of their Attractor Dynamics. | 985,903 |
Digital image retrieval is one of the major concepts in image processing. In this paper, a novel approach is proposed to retrieve digital images from huge databases, using texture analysis techniques to extract discriminant features together with color and shape features. The proposed approach consists of three steps. In the first, shape detection is done based on the top-hat transform to detect and crop the main object parts of the image, especially complex ones. The second step includes a texture feature representation algorithm that uses color local binary patterns and local variance as discriminant operators. Finally, to retrieve the images most closely matching the query, the log-likelihood ratio is used. To decrease the computational complexity, a novel algorithm is prepared that disregards categories not similar to the query image, using the log-likelihood ratio as a dissimilarity measure together with a threshold-tuning technique. The performance of the proposed approach is evaluated on the Corel and Simplicity image sets and compared with several other well-known approaches in terms of precision and recall, which shows the superiority of the proposed approach. Low noise sensitivity, rotation invariance, shift invariance, gray-scale invariance, and low computational complexity are among its other advantages. | ['Farshad Tajeripour', 'Mohammad Saberi', 'Shervan Fekri Ershad'] | Developing a Novel Approach for Content Based Image Retrieval Using Modified Local Binary Patterns and Morphological Transform | 563,254 |
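For reference, a sketch of the classic 3×3 local binary pattern operator that the modified descriptor above builds on; the paper's color LBP and local-variance extensions are not reproduced here:

```python
import numpy as np

def lbp_3x3(img):
    """Classic 8-neighbor LBP: threshold each neighbor against the center
    pixel and pack the 8 bits into a code in [0, 255]."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # Neighbors in a fixed clockwise order starting at the top-left.
    offs = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1+dy : img.shape[0]-1+dy, 1+dx : img.shape[1]-1+dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

# A 256-bin LBP histogram, the usual texture feature for retrieval:
hist = np.bincount(lbp_3x3(np.random.rand(64, 64)).ravel(), minlength=256)
```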
This paper presents an algorithm to compute the globally optimal fixture with frictionless contacts in a discrete point set on an object. The capability of a fixture to immobilize the object is evaluated by the minimum of the largest conflict of the object with the contacts over all motion directions, which can be reduced to the radius of the largest origin-centered ball contained in the convex hull of primitive contacts wrenches. All candidate fixtures (combinations of the discrete points of a certain number) are expressed in a tree structure, in which each node contains a discrete point representing a contact location and each path from the roots to a leaf represents a fixture. A preorder walk of the tree is performed to search for the optimal fixture. Necessary conditions are derived to predict whether the subtree of a node contains a better fixture. If any necessary condition is not met, then all the fixtures contained in the subtree can be simply rejected. By this means only a small portion of the tree will be traversed and the globally optimal fixture can be found more quickly. This algorithm has been tested on various three-dimensional objects and has been found to be over ten times faster than the brute-force search. | ['Yu Zheng'] | Computing the Globally Optimal Frictionless Fixture in a Discrete Point Set | 879,343 |
There is growing interest in developing playful experiences for animals within the field of Animal-Computer Interaction (ACI). These digital games aim to improve animals' wellbeing and provide them with enriching activities. However, little research has been conducted to analyze the factors and stimuli that could lead animals to engage with a specific game. These factors could vary among different animal species, or even between individuals of the same species. Identifying the most appropriate artifacts to attract the attention of an animal species would help in the development of engaging playful activities for them. This paper describes early findings of an observational study on cats, which evaluated their interest in different kinds of technologically-based stimuli and interaction modalities. This study and further exploration of its results would inform the development of suitable and engaging playful experiences for cats. | ['Patricia Pons', 'Javier Jaen'] | Towards the Creation of Interspecies Digital Games: An Observational Study on Cats' Interest in Interactive Technologies | 723,157 |
Consider a multilayer perceptron (MLP) with d inputs, a single hidden sigmoidal layer and a linear output. By adding an additional d inputs to the network with values set to the square of the first d inputs, properties reminiscent of higher-order neural networks and radial basis function networks (RBFN) are added to the architecture with little added expense in terms of weight requirements. Of particular interest, this architecture has the ability to form localized features in a d-dimensional space with a single hidden node but can also span large volumes of the input space; thus, the architecture has the localized properties of an RBFN but does not suffer as badly from the curse of dimensionality. I refer to a network of this type as a SQuare Unit Augmented, Radially Extended, MultiLayer Perceptron (SQUARE-MLP or SMLP). | ['Gary W. Flake'] | Square Unit Augmented, Radially Extended, Multilayer Perceptrons | 408,503 |
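The augmentation is simple enough to state in a few lines: duplicate the d inputs, square the duplicates, and feed all 2d values to an ordinary MLP, so a single hidden sigmoid unit sees a quadratic argument and can form a localized response. A sketch with an illustrative tiny network:

```python
import numpy as np

def square_augment(X):
    """Append the element-wise squares of the d inputs, giving 2d inputs."""
    return np.hstack([X, X ** 2])

def smlp_forward(X, W1, b1, w2, b2):
    """One hidden sigmoidal layer on square-augmented inputs, linear output."""
    H = 1.0 / (1.0 + np.exp(-(square_augment(X) @ W1 + b1)))
    return H @ w2 + b2

rng = np.random.default_rng(0)
d, hidden = 3, 5
X = rng.normal(size=(10, d))
W1 = rng.normal(size=(2 * d, hidden)); b1 = np.zeros(hidden)
w2 = rng.normal(size=hidden);          b2 = 0.0
print(smlp_forward(X, W1, b1, w2, b2).shape)   # (10,)
```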
An alternative smoothing method for the high-dimensional max function has been studied. The proposed method is a recursive extension of two-dimensional smoothing functions. In order to analyze the proposed method, a theoretical framework related to smoothing methods is discussed. Moreover, we support our discussion by considering some application areas. This is followed by a comparison with an alternative well-known smoothing method. | ['Ilker Birbil', 'Shu-Cherng Fang', 'Hans Frenk', 'Shuzhong Zhang'] | Recursive Approximation of the High Dimensional max Function | 598,251 |
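A sketch of the recursive construction: smooth the two-dimensional max with a standard smoothing function, here the CHKS-type φ_μ(x, y) = ((x + y) + √((x − y)² + μ²))/2 (an assumed choice, since the abstract does not fix one), and fold it over the coordinates:

```python
import math
from functools import reduce

def smooth_max2(x, y, mu):
    # Smooth max(x, y) = (x + y + |x - y|)/2 with |t| ~ sqrt(t^2 + mu^2).
    return 0.5 * ((x + y) + math.sqrt((x - y) ** 2 + mu ** 2))

def smooth_max(xs, mu):
    """Recursive extension to n dimensions: fold the 2-D smoothing, so the
    result is smooth in all coordinates and errs from max(xs) by O(mu)."""
    return reduce(lambda a, b: smooth_max2(a, b, mu), xs)

print(smooth_max([1.0, 3.0, 2.5], mu=1e-3), max(1.0, 3.0, 2.5))
```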
Various kinds of sources (such as voice, video, or data) with diverse traffic characteristics and quality of service (QoS) requirements, multiplexed at very high rates, lead to significant traffic problems such as packet losses, transmission delays, and delay variations, caused mainly by congestion in the networks. Predicting these problems in real time is quite difficult, making the effectiveness of "traditional" methodologies based on analytical models questionable. This article proposes and evaluates a QoS routing policy for communication networks with irregular topology and traffic, called widest K-shortest paths Q-routing. The technique used to evaluate reinforcement signals is Q-learning. Compared to standard Q-routing, the exploration of paths is limited to the K best loop-free paths in terms of hop count (the number of routers in a path), leading to a substantial reduction in convergence time. This work thus proposes a routing scheme that improves the delay factor and is based on reinforcement learning, using Q-learning as the reinforcement learning technique and introducing the K-shortest idea into the learning process. The proposed algorithm is applied to two different topologies, and OPNET is used to evaluate its performance. The evaluation is done for two traffic conditions, namely low load and high load. | ['Alireza Esfahani', 'Morteza Analoui'] | Widest K-Shortest Paths Q-Routing: A New QoS Routing Algorithm in Telecommunication Networks | 199,470 |
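A sketch of the core Q-routing update restricted to K-shortest candidate next hops, in the spirit of the algorithm described above; the nested-dictionary Q-table, the precomputed candidate sets, and the learning rate η are assumptions of this illustration:

```python
def q_routing_update(Q, x, d, y, q_delay, t_delay, candidates_y, eta=0.1):
    """After node x forwards a packet for destination d to neighbor y:
    Q_x(d, y) += eta * (q + t + min_z Q_y(d, z) - Q_x(d, y)),
    where q is the queueing delay at x, t is the transmission delay to y,
    and z ranges only over y's K-shortest candidate next hops toward d."""
    best_from_y = min(Q[y][d][z] for z in candidates_y)   # y's onward estimate
    target = q_delay + t_delay + best_from_y
    Q[x][d][y] += eta * (target - Q[x][d][y])

def choose_next_hop(Q, x, d, candidates_x):
    """Greedy choice among the K-shortest candidate neighbors of x."""
    return min(candidates_x, key=lambda y: Q[x][d][y])
```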
Because it is difficult to predict the output of an oilfield affected by multiple variables, a back propagation (BP) neural network model is built to predict oilfield output, since classic statistical methods and static models cannot meet the precision demands of this nonlinear and uncertain system. Effective depth, permeability, porosity, and water content are used as the inputs of the neural network and oilfield output as its output. The results show that this prediction approach is effective and can forecast oilfield output with high accuracy, comparable to classic methods; the BP neural network is thus an effective method for predicting oilfield output. The application of this approach can supply reliable data for the development of an oilfield and decrease the risks of exploitation. | ['Changjun Zhu', 'Xiujuan Zhao'] | Application of Artificial Neural Network in the Prediction of Output in Oilfield | 376,308 |
| ['Ioannis Paraskevopoulos', 'Emmanuel Tsekleves'] | SIMULATION OF PHOTOVOLTAICS FOR DEFENCE AND COMMERCIAL APPLICATIONS BY EXTENDING EXISTING 3D AUTHORING SOFTWARE - A Validation Study | 684,267 |
As main memory capacity increases, more of the database read requests will be satisfied from the buffer system. Consequently, the amount of disk write operations relative to disk read operations will increase. This calls for a focus on write-optimized storage managers. We show how the Vagabond object storage manager uses no-overwrite sequential writing of long blocks to achieve high write performance. Vagabond also supports versioned/temporal objects; with the no-overwrite policy used, this does not imply any extra cost. Large objects, e.g., video and matrices, are divided into large chunks. This makes it easy to achieve high read and write bandwidth, which is important since in many application areas high data bandwidth is just as important as high transaction throughput. The buffer system in Vagabond is object-based, rather than page-based. This gives better utilization of main memory. Transparent compression of objects on disk is supported. | ['Kjetil Nørvåg', 'Kjell Bratbergsengen'] | Log-only temporal object storage | 280,113 |
Source IP addresses are often used as a major feature for user modeling in computer networks. Particularly in the field of Distributed Denial of Service (DDoS) attack detection and mitigation traffic models make extensive use of source IP addresses for detecting anomalies. Typically the real IP address distribution is strongly undersampled due to a small amount of observations. Density estimation overcomes this shortage by taking advantage of IP neighborhood relations. In many cases simple models are implicitly used or chosen intuitively as a network based heuristic. In this paper we review and formalize existing models including a hierarchical clustering approach first. In addition, we present a modified k-means clustering algorithm for source IP density estimation as well as a statistical motivated smoothing approach using the Nadaraya-Watson kernel-weighted average. For performance evaluation we apply all methods on a 90 days real world dataset consisting of 1.3 million different source IP addresses and try to predict the users of the following next 10 days. ROC curves and an example DDoS mitigation scenario show that there is no uniformly better approach: k-means performs best when a high detection rate is needed whereas statistical smoothing works better for low false alarm rate requirements like the DDoS mitigation scenario. | ['Markus Goldstein', 'Matthias Reif', 'Armin Stahl', 'Thomas M. Breuel'] | Server-Side Prediction of Source IP Addresses Using Density Estimation | 328,620 |
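A sketch of Nadaraya-Watson-style smoothing over the source IP space as mentioned above: addresses are treated as 32-bit integers and the estimate at an address is a kernel-weighted average of observed counts in its neighborhood, so undersampled addresses borrow mass from nearby observed ones. The Gaussian kernel and bandwidth are illustrative assumptions:

```python
import numpy as np

def ip_to_int(ip):
    a, b, c, d = (int(p) for p in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def nw_density(query_ip, observed_ips, counts, bandwidth=256.0):
    """Kernel-weighted average of per-IP counts around query_ip, exploiting
    IP neighborhood relations to smooth an undersampled distribution."""
    q = ip_to_int(query_ip)
    xs = np.array([ip_to_int(ip) for ip in observed_ips], dtype=np.int64)
    w = np.exp(-0.5 * ((xs - q) / bandwidth) ** 2)     # Gaussian kernel weights
    if w.sum() == 0:
        return 0.0
    return float(w @ np.asarray(counts, dtype=float) / w.sum())

obs = ["10.0.0.1", "10.0.0.5", "10.0.4.9"]
print(nw_density("10.0.0.2", obs, counts=[50, 30, 5]))
```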
A control strategy based on single current sensor is proposed for a four-switch three-phase brushless DC (BLDC) motor system to lower cost and improve performance. The system's whole working process is divided into two groups. In modes 2, 3, 5, and 6, where phase c works, phase- c current is sensed to control phases a and b , and phase- c current is consequently regulated. In modes 1 and 4, the combination of four suboperating modes for controlling phase- c current is proposed based on detailed analysis on the different rules that these operating modes have on phase- c current. Phase- c current is maintained at nearly zero level first, and phase- a and phase- b currents are regulated by speed circle. To improve control performance, a single-neuron adaptive proportional-integral (PI) algorithm is adopted to realize the speed regulator. Simulation and experimental systems are set up to verify the proposed strategy. According to simulation and experimental results, the proposed strategy shows good self-adapted track ability with low current ripple and strong robustness to the given speed reference model. Also, the structure of the drive is simplified. | ['Changliang Xia', 'Zhiqiang Li', 'Tingna Shi'] | A Control Strategy for Four-Switch Three-Phase Brushless DC Motor Using Single Current Sensor | 371,894 |
We propose a method for analyzing trade-off between security policies for Java mobile codes and requirements for Java application. We assume that mobile codes are downloaded from different sites, they are used in an application on a site, and their functions are restricted by security policies on the site. We clarify which functions to be performed under the policies on the site using our tool [H. Kaiya et al., (2002)]. We also clarify which functions are needed so as to meet the requirements for the application by goal oriented requirements analysis (GORA). By comparing functions derived from the policies and functions from the requirements, we find conflicts between the policies and the requirements, and also find vagueness of the requirements. | ['Haruhiko Kaiya', 'Kouta Sasaki', 'Yasunori Maebashi', 'Kenji Kaijiri'] | Trade-off analysis between security policies for Java mobile codes and requirements for Java application | 287,989 |
Neighbor discovery (ND) is a basic and crucial step for initializing wireless ad hoc networks. A fast, precise, and energy-efficient ND protocol has significant importance to subsequent operations in wireless networks. However, many existing protocols have a high probability of generating idle slots in their neighbor discovering processes, which prolongs the executing duration, thus compromising their performance. In this paper, we propose a novel randomized protocol FRIEND, which is a prehandshaking ND protocol, to initialize synchronous full-duplex wireless ad hoc networks. By introducing a prehandshaking strategy to help each node be aware of activities of its neighborhood, we significantly reduce the probabilities of generating idle slots and collisions. Moreover, with the development of single-channel full-duplex communication technology, we further decrease the processing time needed in FRIEND and construct the first full-duplex ND protocol. Our theoretical analysis proves that FRIEND can decrease the duration of ND by up to 48% in comparison with classical ALOHA-like protocols. In addition, we propose HD-FRIEND for half-duplex networks and variants of FRIEND for multihop and duty-cycled networks. Both theoretical analysis and simulation results show that FRIEND can adapt to various scenarios and significantly decrease the duration of ND. | ['Guobao Sun', 'Fan Wu', 'Xiaofeng Gao', 'Guihai Chen', 'Wei Wang'] | Time-Efficient Protocols for Neighbor Discovery in Wireless Ad Hoc Networks | 493,831 |
In this paper, we employ a monitor-based approach for on-chip bus (OCB) compliance test. To describe the OCB protocols, we propose a FSM model, which can help to extract the necessary properties systematically and verify the data part of a bus transfer efficiently. To demonstrate our methodology, we illustrate two OCB protocols, WISHBONE and AMBA AHB, as the study cases. The experimental results show that we can verify the OCB protocols efficiently and detect the design errors when tests fail. | ['Hue-Min Lin', 'Chia-Chih Yen', 'Che-Hua Shih', 'Jing-Yang Jou'] | On compliance test of on-chip bus for SOC | 494,860 |
['Mark D. Fairchild'] | The HDR Photographic Survey. | 740,266 |
|
The growing popularity of hosted storage services and shared storage infrastructure in data centers is driving the recent interest in resource management and QoS in storage systems. The bursty nature of storage workloads raises significant performance and provisioning challenges, leading to increased resource requirements, management costs, and energy consumption. We present a novel workload shaping framework to handle bursty workloads, in which the arrival stream is dynamically decomposed to isolate its bursts and then rescheduled to exploit available slack. We show how decomposition reduces server capacity requirements and power consumption significantly, while affecting QoS guarantees minimally. We present an optimal decomposition algorithm, RTT, and a recombination algorithm, Miser, and show the benefits of the approach by evaluating the performance of several storage workloads using both simulation and a Linux implementation. | ['Lanyue Lu', 'Peter J. Varman', 'Kshitij A. Doshi'] | Decomposing Workload Bursts for Efficient Storage Resource Management | 194,013
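The decomposition idea can be illustrated with a token-bucket classifier: requests that fit a sustainable (rate, depth) envelope form the smooth base class, while the overflow is tagged as burst and deferred into idle slack. This is a simplified stand-in, not the paper's optimal RTT algorithm or the Miser recombiner.

```python
def decompose(arrivals, rho, sigma):
    """Split an arrival stream into a smooth base class and a burst class.

    arrivals: list of (time, demand) pairs sorted by time; rho: sustainable
    service rate; sigma: bucket depth. A (sigma, rho) token bucket admits
    requests to the base class; overflow is tagged as burst and can be
    rescheduled later into idle slack. Illustrative sketch only.
    """
    tokens, last_t = sigma, 0.0
    base, burst = [], []
    for t, d in arrivals:
        tokens = min(sigma, tokens + rho * (t - last_t))  # refill bucket
        last_t = t
        if d <= tokens:
            tokens -= d
            base.append((t, d))
        else:
            burst.append((t, d))   # deferred: served using available slack
    return base, burst
```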
Traditionally, many proofs in real-time scheduling theory were informal and lacked the rigor usually required of good mathematical proofs. Some attempts have been made to make the proofs more reliable, including using formal logics to specify scheduling algorithms and verify their properties. In particular, Duration Calculus, a real-time interval temporal logic, has been used, since timing requirements in scheduling can be naturally associated with intervals. This paper aims to improve the work in this area and give a summary. Static and dynamic priority scheduling algorithms are formalised in Duration Calculus, and classical theorems for schedulability analysis are proven using the formal proof system of Duration Calculus. | ['Qiwen Xu', 'Naijun Zhan'] | Formalising scheduling theories in duration calculus | 92,877
Peer-to-peer content-distribution networks are nowadays highly popular among users who have stationary computers with high-bandwidth Internet connections. Mobile devices (e.g. cell phones) that are connected to the Internet via cellular-radio networks, however, have not yet entered this field to a satisfactory extent. Although most mobile devices have the necessary hardware resources for joining peer-to-peer content-distribution networks, they are often not able to benefit from participation, due to limitations caused by mobility. In this work, mobile devices are identified as providers of advanced mobile features and services that are usually not available to computers in stationary networks. These mobile features and services can be exchanged for services in peer-to-peer networks, turning mobile devices into valuable trading partners. Partnership schemes are set up to define a fair cooperation between mobile devices and other peers. A novel peer-to-peer architecture is suggested that applies partnership schemes to a well-established peer-to-peer content-distribution network and facilitates the integration of mobile devices. | ['Andreas Berl', 'Hermann de Meer'] | Integration of Mobile Devices into Popular Peer-to-Peer Networks | 366,709
This paper discusses how the Internet can facilitate cultural expression that resists the homogenizing effects of globalization. It examines how local cultures adapt their linguistic behavior and language choices to the Internet and express themselves in culturally meaningful ways without being subsumed by a global agenda. The research reported in this paper is based on a survey administered in Uzbekistan, a post-Soviet, multilingual society that is experiencing the pressures of global culture as well as Russian culture. Literature about language, nationalism, and Internet use in multilingual societies is presented, and the linguistic setting of Uzbekistan is described. The results of the survey relevant to Internet use, online language choices, and perceptions of language on the Web are reported here. | ['Carolyn Wei', 'Beth E. Kolko'] | Resistance to globalization: Language and Internet diffusion patterns in Uzbekistan | 335,070 |
['Shakhnaz Akhmedova', 'Eugene Semenkin', 'Vladimir Stanovov'] | Fuzzy Rule-Based Classifier Design with Co-operation of Biology Related Algorithms | 845,910 |
|
Nowadays, most modern digital TV and broadcast systems adopt a novel network technology, the single frequency network (SFN). In an SFN, the same data stream is broadcast from multiple transmitters in the same frequency band. Thus, SFN-based passive radar (SPR) can generate multiple simultaneous measurements from the same target, introducing a measurement-to-transmitter association ambiguity (MTAA). False association can lead to ghosts. In this paper, we present a solvability analysis for MTAA in 2-dimensional (2-D) space for the well-separated-targets case. The purpose of this solvability analysis is to obtain the simplest sufficient condition for resolving MTAA. Besides, an association hypothesis decision (AHD) method is proposed for deghosting. Numerical simulations demonstrate that AHD is an effective method against ghosts in the SPR scenario. | ['Jianxin Yi', 'Xianrong Wan', 'Feng Cheng', 'Zhixin Zhao', 'Hengyu Ke'] | Deghosting for target tracking in single frequency network based passive radar | 603,579
['Rallis C. Papademetriou', 'Vasilios Pasias'] | NetLab: An interactive simulation environment for communication networks. | 546,863 |
|
The problem of multiuser detection in multipath CDMA channels with a receiver antenna array is considered. The optimal space-time multiuser receiver structure is first outlined, followed by linear space-time multiuser detection methods based on iterative interference cancellation. Blind adaptive space-time multiuser detection techniques are also developed. It is seen that the proposed multiuser space-time processing techniques offer substantial performance gains over the single-user-based methods, especially in a near-far situation. | ['Xiaodong Wang', 'H.V. Poor'] | Space-time processing in multiple-access systems | 299,561 |
In a real-time database system, it is difficult to meet all of the timing constraints due to the consistency requirements of the underlying database. However, when the transactions in the system are heterogeneous, they are not all of the same importance; some are of greater importance than others. In this paper, we propose a new protocol called OCC-PDATI (Optimistic Concurrency Control Protocol using Dynamic Adjustment of serialization order and Transaction Importance), which uses information about the importance of the transactions in conflict resolution. Performance studies of our protocol have been carried out in a prototype real-time database system. The results clearly indicate that OCC-PDATI meets the goal of favoring transactions of high importance. | ['Jan Lindström', 'Kimmo E. E. Raatikainen'] | Using importance of transactions and optimistic concurrency control in firm real-time databases | 235,287
['Stefan Schlobach', 'Krzysztof Janowicz'] | Selected papers from the combined EKAW 2014 and Semantic Web journal track | 734,613 |
|
We introduce a new framework for extracting knowledge from written texts and diagrams and for utilizing the obtained knowledge. We use a combination of multiple media for effective communication. To realize this mechanism as a hypermedium, we propose a novel idea of media integration. First, the correlation between the two media is analyzed to automatically interpret the semantic structure of both. The integrated hypermedia obtained as a result of the correspondence and the semantic interpretation is quite useful for deriving knowledge about the topics being described. To demonstrate the usefulness of the integrated media, we have constructed a prototype system for flexible explanation generation. Various kinds of explanations can be generated by our system. | ['Yuichi Nakamura', 'Miwa Takahashi', 'Masayuki Onda', 'Yuichi Ohta'] | Knowledge extraction from diagram and text for media integration | 309,206
The error correction capability of the bit-flipping decoding algorithm for low-density parity-check (LDPC) codes is studied by introducing variable node adjacency (VNA) graphs, which are derived from the Tanner graphs of LDPC codes. For codes with column weight λ and girth g = 8, it can be shown that error patterns of weight less than or equal to λ − 1 can be corrected. This result implies that the bit-flipping algorithm can decode up to the random error-correcting capability over the binary symmetric channel for girth-8 codes whose random error-correcting capability equals λ − 1. | ['Wen-Yao Chen', 'Chung-Chin Lu'] | On error correction capability of bit-flipping algorithm for LDPC codes | 321,079
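For reference, a minimal Gallager bit-flipping decoder over the binary symmetric channel looks as follows; the parity-check matrix H and the iteration cap are generic, and the flipping rule (flip all bits tied for the most unsatisfied checks) is one common variant.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Gallager bit-flipping decoding over the BSC (sketch).

    H: (m, n) binary parity-check matrix (0/1 int array);
    y: received hard-decision vector (0/1 int array of length n).
    Each iteration flips the bits involved in the most unsatisfied checks.
    """
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x                      # all parity checks satisfied
        # Count, per bit, how many unsatisfied checks it participates in.
        counts = H.T @ syndrome
        x[counts == counts.max()] ^= 1    # flip the worst offenders
    return x                              # best effort after max_iter
```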
Thermoelectric power sensors used as power transfer standards are promising devices for further enhancement of the microcalorimetric technique in the high-frequency field. A coaxial microcalorimeter based on thermoelectric power sensors has been studied at the Istituto Nazionale di Ricerca Metrologica (INRIM). In the literature, several models have been proposed, considering different calibration processes and error sources. Here, we analyze these models in terms of total uncertainty for the 3.5-mm coaxial line case between 10 MHz and 26.5 GHz. The advantages and disadvantages of the models are highlighted. | ['L. Brunetti', 'L. Oberto', 'Marco Sellone', 'E. Vremera'] | Comparison Among Coaxial Microcalorimeter Models | 50,728
The population projection of the Indian subcontinent, which is closely related to the future development of this region and even the whole world, has attracted great attention among sociologists as well as scientists. However, most previous studies are based only on fertility, mortality, or other individually quantifiable factors, using traditional statistical models, and thus may lack comprehensiveness in their results. Historical population data are the comprehensive reflection of population development under the influence of all factors. Based on historical population data spanning over 2000 years, a feedforward neuronet equipped with the weights-and-structure-determination (WASD) algorithm, aided by the twice-pruning (TP) technique, is constructed. Besides, by introducing cubic-spline and error-evaluation methods, the neuronet shows great performance in population projection. Owing to the strong learning and generalization abilities of the presented TP-aided WASD neuronet, we successfully draw up the population projection for the Indian subcontinent. | ['Yunong Zhang', 'Wan Li', 'Liangyu He', 'Junqiao Qiu', 'Hongzhou Tan'] | Population projection of the Indian subcontinent using TP-aided WASD neuronet | 918,008
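The WASD idea of determining output weights directly rather than by iterative training can be sketched as follows, assuming a power-activation hidden layer and a pseudoinverse (least-squares) solve; the structure search is reduced here to keeping the hidden-layer size with the lowest fitting error, and the twice-pruning step is omitted.

```python
import numpy as np

def wasd_fit(x, y, max_hidden=50):
    """Sketch of weights-and-structure determination for a one-hidden-layer
    feedforward net: hidden units are fixed basis functions, output weights
    come from a least-squares (pseudoinverse) solve, and the hidden-layer
    size is chosen by the lowest fitting error. Illustrative only; the
    paper's exact WASD/TP procedure differs.
    """
    best_w, best_err, best_k = None, np.inf, 0
    for k in range(1, max_hidden + 1):
        # Power-activation basis (a common WASD choice): phi_j(x) = x**j
        Phi = np.column_stack([x**j for j in range(k)])
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        err = np.mean((Phi @ w - y) ** 2)
        if err < best_err:
            best_w, best_err, best_k = w, err, k
    return best_w, best_k
```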
In this paper, we propose a partially-parallel irregular LDPC decoder for the IEEE 802.11n standard. The design is based on a novel sum-delta message-passing schedule to achieve high throughput at low area cost. We further improve the design with a pipeline structure and parallel computation. The synthesis result in TSMC 0.18-µm CMOS technology demonstrates that, for the (648,324) irregular LDPC code, our decoder achieves a 7.5X improvement in throughput, reaching 402 Mbps at a frequency of 200 MHz, with an 11% area reduction. | ['Wen Ji', 'Yuta Abe', 'Takeshi Ikenaga', 'Satoshi Goto'] | A high performance LDPC decoder for IEEE802.11n standard | 346,967
In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem, we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees, which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale" image corpora. | ['James Philbin', 'Ondrej Chum', 'Michael Isard', 'Josef Sivic', 'Andrew Zisserman'] | Object retrieval with large vocabularies and fast spatial matching | 235,227
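The bag-of-words retrieval stage, before spatial re-ranking, can be sketched with tf-idf scoring over quantized visual words; the data structures and names here are illustrative, and RANSAC-based spatial verification would then re-rank the top of the returned list.

```python
import numpy as np
from collections import Counter

def tfidf_scores(query_words, db_bags, n_docs):
    """Rank database images against a query using tf-idf over visual words.

    query_words: list of quantized visual-word ids for the query region;
    db_bags: dict image_id -> Counter of visual words; n_docs: corpus size.
    A sketch of the bag-of-words stage only; spatial verification would
    re-rank the top results afterwards.
    """
    df = Counter()                      # document frequency per visual word
    for bag in db_bags.values():
        df.update(bag.keys())
    idf = {w: np.log(n_docs / df[w]) for w in df}
    q = Counter(query_words)
    scores = {}
    for img, bag in db_bags.items():
        # Dot product of tf-idf vectors, restricted to shared words.
        scores[img] = sum(q[w] * bag[w] * idf.get(w, 0.0) ** 2
                          for w in q if w in bag)
    return sorted(scores.items(), key=lambda kv: -kv[1])
```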
There is a strong need for a new Internet Protocol for future networks to replace the existing Internet Protocol (IPv4). IPv6 therefore comes with new features and improvements for the devices that will be connected to the network. Consequently, some transition mechanisms are required during the migration from IPv4 to IPv6 networks. Furthermore, the BDMS has been proposed and designed as an IPv4/IPv6 translation mechanism. In this paper, we implement the BDMS in order to study its behavior when different buffer sizes are used. The OMNeT++ simulator is used to evaluate the performance of BDMS using different performance evaluation metrics, such as RTT, EED, and total queuing delay, for each communication session between IPv4 and IPv6 hosts. The simulation results in this paper show that, when the translator buffer size increases, the RTT and EED for the communication session increase as well. | ["Ra'ed AlJa'afreh", 'John Mellor', 'Irfan Awan'] | Implementation of IPv4/IPv6 BDMS Translation Mechanism | 411,402
In many algorithms for background modeling, a distribution over feature values is modeled at each pixel. These models, however, do not account for the dependencies that may exist among nearby pixels. The joint domain-range kernel density estimate (KDE) model by Sheikh and Shah [7], which is not a pixel-wise model, represents the background and foreground processes by combining the three color dimensions and two spatial dimensions into a five-dimensional joint space. The Sheikh and Shah model, as we will show, has a peculiar dependence on the size of the image. In contrast, we build three-dimensional color distributions at each pixel and allow neighboring pixels to influence each other’s distributions. Our model is easy to interpret, does not exhibit the dependency on image size, and results in higher accuracy. Also, unlike Sheikh and Shah, we build an explicit model of the prior probability of the background and the foreground at each pixel. Finally, we use the adaptive kernel variance method of Narayana et al. [5] to adapt the KDE covariance at each pixel. With a simpler and more intuitive model, we can better interpret and visualize the effects of the adaptive kernel variance method, while achieving accuracy comparable to state-of-the-art on a standard backgrounding benchmark. | ['Manjunath Narayana', 'Allen R. Hanson', 'Erik G. Learned-Miller'] | Improvements in Joint Domain-Range Modeling for Background Subtraction | 381,007 |
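A per-pixel color KDE background test of the kind described can be sketched as follows, assuming a shared Gaussian kernel bandwidth rather than the adaptive per-pixel variance of Narayana et al.; the density is unnormalized and the constants are illustrative.

```python
import numpy as np

def kde_foreground(frame, samples, bandwidth=15.0, threshold=1e-6):
    """Per-pixel color KDE background test (sketch).

    frame: (H, W, 3) current image; samples: (N, H, W, 3) recent background
    samples per pixel. A pixel is labeled foreground when its (unnormalized)
    Gaussian-kernel density under the per-pixel background model falls
    below `threshold`.
    """
    diff = frame[None].astype(float) - samples.astype(float)  # (N, H, W, 3)
    sq = (diff ** 2).sum(axis=-1) / (2 * bandwidth ** 2)      # (N, H, W)
    dens = np.exp(-sq).mean(axis=0)        # average kernel response per pixel
    return dens < threshold                # boolean foreground mask
```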
This paper presents a statistical algorithm for collaborative mobile robot localization. Our approach uses a sample-based version of Markov localization, capable of localizing mobile robots in an any-time fashion. When teams of robots localize themselves in the same environment, probabilistic methods are employed to synchronize each robot's belief whenever one robot detects another. As a result, the robots localize themselves faster, maintain higher accuracy, and high-cost sensors are amortized across multiple robot platforms. The technique has been implemented and tested using two mobile robots equipped with cameras and laser range-finders for detecting other robots. The results, obtained with the real robots and in a series of simulation runs, illustrate drastic improvements in localization speed and accuracy when compared to conventional single-robot localization. A further experiment demonstrates that under certain conditions, successful localization is only possible if teams of heterogeneous robots collaborate during localization. | ['Dieter Fox', 'Wolfram Burgard', 'Hannes Kruppa', 'Sebastian Thrun'] | A Probabilistic Approach to Collaborative Multi-Robot Localization | 105,270
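One step of sample-based Markov (Monte Carlo) localization can be sketched as a standard particle filter update; the motion and measurement models are left as caller-supplied functions, and a robot-detection event would enter through the measurement likelihood, as in the collaborative scheme described.

```python
import numpy as np

def mcl_step(particles, weights, motion, meas_lik):
    """One Monte Carlo localization update (sketch).

    particles: (N, 3) poses (x, y, theta); weights: (N,) importance weights;
    motion: function adding noisy odometry to each pose; meas_lik: function
    returning p(z | pose) per particle. A detection of another robot can be
    folded into meas_lik, synchronizing the two robots' beliefs.
    """
    particles = motion(particles)                 # predict
    weights = weights * meas_lik(particles)       # correct
    weights = weights / weights.sum()
    n = len(particles)
    idx = np.random.choice(n, size=n, p=weights)  # resample
    return particles[idx], np.full(n, 1.0 / n)
```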
This paper studies charged device model electrostatic discharge (CDM-ESD) events in the die-stacking process of 3D-ICs and investigates CDM-ESD protection circuits for individual TSVs to prevent high voltage stress on transistors connected to TSVs. Models for the power, area, delay, and signal integrity of TSVs considering ESD protection are presented. The models are used to derive a methodology for designing reliable 3D-ICs considering CDM-ESD while minimizing the overheads. We study the impact of ESD protection on a die-to-die asynchronous interface circuit. | ['Duckhwan Kim', 'Saibal Mukhopadhyay'] | On the Design of Reliable 3D-ICs Considering Charged Device Model ESD Events During Die Stacking | 58,490
The service orientation approach emerged from the software engineering community and has now become a widely discussed design paradigm for almost every aspect of an enterprise architecture (EA). However, experience from case studies has shown that it is necessary to explicitly differentiate service categories in EA, their goals, and their resulting design guidelines. This paper derives a sophisticated understanding of different service categories, their respective goals, and design guidelines based on empirical research. | ['Stephan Aier', 'Bettina Gleichauf'] | Towards a Sophisticated Understanding of Service Design for Enterprise Architecture | 552,508
In this paper, we present a novel framework for supporting the management and optimization of applications subject to software anomalies and deployed on large-scale cloud architectures composed of different geographically distributed cloud regions. The framework uses machine learning models to predict failures caused by the accumulation of anomalies. It introduces a novel workload balancing approach and a proactive system scale-up/scale-down technique. We developed a prototype of the framework and present experiments validating the applicability of the proposed approaches. | ['Dimiter R. Avresky', 'Pierangelo Di Sanzo', 'Alessandro Pellegrini', 'Bruno Ciciani', 'Luca Forte'] | Proactive Scalability and Management of Resources in Hybrid Clouds via Machine Learning | 611,493
Signal identification is an umbrella term for signal processing techniques designed to identify the transmission parameters of unknown or partially known communication signals. Initially a key technology for military applications such as signal interception, radio surveillance, and electronic warfare, signal identification techniques have recently found applications in commercial wireless communications as an enabling technology for cognitive receivers. With the advance and rapid adoption of multiple-input multiple-output (MIMO) communication systems in the last decade, extending signal identification methods to this transmission paradigm has become a priority and the focus of intensive research efforts. The aim of this work is to provide a comprehensive state-of-the-art survey of algorithms proposed for the new and challenging signal identification problems specific to MIMO systems, including space-time block code (STBC) identification, MIMO modulation identification, and detection of the number of transmit antennas. Finally, concluding remarks on MIMO signal identification are provided, along with an outline of open problems and future research directions. | ['Yahia A. Eldemerdash', 'Octavia A. Dobre', 'Menguc Oner'] | Signal Identification for Multiple-Antenna Wireless Systems: Achievements and Challenges | 716,185
Adaptation is an essential capability for intelligent robots to work in new environments. In the learning framework of Programming by Demonstration (PbD) and Reinforcement Learning (RL), a robot usually learns skills from a latent feature space obtained by dimension reduction techniques. Because the latent space is optimized for a specific environment during the training phase, it typically contains fewer variations. Accordingly, searching for a solution within the latent space can be less effective for robot adaptation to new environments with unseen changes. In this paper, we propose a novel Feature Space Decomposition (FSD) approach to effectively address the robot adaptation problem, which is directly applicable to the learning framework based on PbD and RL. Our FSD method decomposes the high-dimensional original features extracted from the demonstration data into principal and non-principal feature space. Then, the non-principal features are used to form a new low-dimensional search space for autonomous robot adaptation based on RL, which is initialized using a generalized trajectory represented by a Gaussian Mixture Model that is learned from the principal features. The scalability of our FSD approach guarantees that optimal solutions can be found in the new non-principal space, if they exist in the original feature space. Experimental results on real robots validate that our FSD approach enables the robots to effectively adapt to new environments, and is usually able to find optimal solutions more quickly than traditional approaches when significant environment changes occur. | ['Chi Zhang', 'Hao Zhang', 'Lynne E. Parker'] | Feature Space Decomposition for effective robot adaptation | 585,354 |
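The decomposition into principal and non-principal feature spaces can be sketched with an SVD/PCA split on the demonstration data; the variance threshold is an assumed knob, and the paper's trajectory learning (GMM) and RL search are out of scope here.

```python
import numpy as np

def feature_space_split(X, var_ratio=0.95):
    """Split demonstration features into principal and non-principal subspaces.

    X: (n_samples, n_features) demonstration data. Components explaining
    `var_ratio` of the variance form the principal space (used to learn the
    generalized trajectory); the remainder forms the low-dimensional search
    space for RL-based adaptation. A sketch of the FSD idea, not its exact
    implementation.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(explained, var_ratio)) + 1
    principal, non_principal = Vt[:k], Vt[k:]   # rows are basis directions
    return principal, non_principal
```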
As software engineering researchers, we are also zealous tool smiths. Building a research prototype is often a daunting task, let alone building an industry-grade family of tools supporting multiple platforms to ensure the generalizability of results. In this paper, we give advice to academic and industrial tool smiths on how to design and build an easy-to-maintain architecture capable of supporting multiple integrated development environments (IDEs). Our experiences stem from WatchDog, a multi-IDE infrastructure that assesses developer testing activities in vivo and that over 2,000 registered developers use. To these software engineering practitioners, WatchDog provides real-time and aggregated feedback in the form of individual testing reports. Project Website: http://www.testroots.org Demonstration Video: https://youtu.be/zXIihnmx3UE | ['Moritz Beller', 'Igor Levaja', 'Annibale Panichella', 'Georgios Gousios', 'Andy Zaidman'] | How to catch 'em all: WatchDog, a family of IDE plug-ins to assess testing | 654,295 |
The reachable sets of a differential inclusion have nonsmooth topological boundaries in general. The main result of this paper is that under the well-known assumptions of Filippov's existence theorem (about solutions of differential inclusions), every epi-Lipschitzian initial compact set $K \subset \mathbb{R}^N$ preserves this regularity for a short time, i.e. $\vartheta_F(t, K)$ is also epi-Lipschitzian for all small $t > 0$. The proof is based on Rockafellar's geometric characterization of epi-Lipschitzian sets and uses a new result about the "inner semicontinuity" of the Clarke tangent cone $(t, y) \mapsto T^C_{\vartheta_F(t, K)}(y) \subset \mathbb{R}^N$ with respect to both arguments. | ['Thomas Lorenz'] | Epi-Lipschitzian reachable sets of differential inclusions | 347,709
Background: Accurate genotype calling is a pre-requisite of a successful Genome-Wide Association Study (GWAS). Although most genotyping algorithms can achieve an accuracy rate greater than 99% for genotyping DNA samples without copy number alterations (CNAs), almost all of these algorithms are not designed for genotyping tumor samples that are known to have large regions of CNAs. | ['Shengping Yang', 'Xiangqin Cui', 'Zhide Fang'] | BCRgt: a Bayesian cluster regression-based genotyping algorithm for the samples with copy number alterations | 461,383
Symbolic execution is a popular technique for automatically generating test cases achieving high structural coverage. Symbolic execution suffers from scalability issues, since the number of symbolic paths that need to be explored is very large (or even infinite) for most realistic programs. To address this problem, we propose a technique, Simple Static Partitioning, for parallelizing symbolic execution. The technique uses a set of pre-conditions to partition the symbolic execution tree, allowing us to effectively distribute symbolic execution and decrease the time needed to explore the symbolic execution tree. The proposed technique requires little communication between parallel instances and is designed to work with a variety of architectures, ranging from fast multi-core machines to cloud or grid computing environments. We implement our technique in the Java PathFinder verification tool-set and evaluate it on six case studies with respect to the performance improvement when exploring a finite symbolic execution tree and performing automatic test generation. We demonstrate speedup in both the analysis time over finite symbolic execution trees and in the time required to generate tests relative to sequential execution, with a maximum analysis-time speedup of 90x observed using 128 workers and a maximum test-generation speedup of 70x observed using 64 workers. | ['Matt Staats', 'Corina S. Pǎsǎreanu'] | Parallel symbolic execution for structural test generation | 512,862
As the success of the Web increasingly brings us towards a fully connected world, home networking systems that connect and manage home appliances become the natural next step to complete the connectivity. Although there has been fast-growing interest in the design of smart appliances and environments, there has been little study of the dependability issues, which are essential to making home networking part of our daily lives. The heterogeneity of various in-home networks, the undependable nature of consumer devices, and the lack of knowledgeable system administrators in the home environment introduce both opportunities and challenges for dependability research. We report the dependability problems we encountered and the solutions we adopted in the deployment of the Aladdin home networking system. We propose the use of a soft-state store as a shared heartbeat infrastructure for monitoring the health of diverse hardware and software entities. We also describe a system architecture for connecting powerline devices to enhance dependability, and a monitoring tool for detecting unusual powerline activities potentially generated by intruders, interference, or ill-behaved devices. | ['Yi-Min Wang', 'Wilf G. Russell', 'Anish Arora', 'Jun Xu', 'R.K. Jagannatthan'] | Towards dependable home networking: an experience report | 319,335
We consider the development of innovative theoretical approaches to describe the states and dynamics of public mood and opinions in social networks, and further envision the practical application of the obtained models, with the help of sociological research, to forecast people's behaviour and to manage various communities in society. The paper demonstrates the workability of percolation models for such purposes. For example, they can define the threshold values for negative attitudes in society and help to study how society clusters into particular social groups united by people's opinions and attitudes. The obtained percolation models show that a 0.09 to 0.15 share of people holding a negative standpoint is critical for the emergence of social upheaval. Once society reaches a certain share of people holding particular attitudes, opposite attitudes (either negative or positive) may be fully mutually suppressed or, vice versa, may strengthen, thus reaching the percolation threshold. | ['S.A. Lesko', 'D.O. Zhukov'] | Percolation Models of Information Dissemination in Social Networks | 729,945
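The percolation reading of these thresholds can be reproduced in miniature: assign a share of nodes a negative attitude and track the relative size of the largest negative cluster while sweeping the share. The graph library and trial counts are incidental choices, not taken from the paper.

```python
import random
import networkx as nx

def giant_negative_cluster(G, share, trials=20):
    """Estimate the relative size of the largest cluster of 'negative' nodes.

    G: social graph; share: fraction of nodes assigned a negative attitude.
    Sweeping `share` and watching this quantity jump reveals a percolation
    threshold (the paper reports critical shares around 0.09-0.15 for its
    networks; the value here depends on the graph used).
    """
    n = G.number_of_nodes()
    sizes = []
    for _ in range(trials):
        neg = set(random.sample(list(G.nodes), int(share * n)))
        sub = G.subgraph(neg)
        largest = max((len(c) for c in nx.connected_components(sub)), default=0)
        sizes.append(largest / n)
    return sum(sizes) / trials

# Example sweep on a random social-like graph (illustrative):
# G = nx.barabasi_albert_graph(10_000, 4)
# for s in (0.05, 0.10, 0.15, 0.20):
#     print(s, giant_negative_cluster(G, s))
```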
OBJECTIVE: A fully immersive, high-fidelity street-crossing simulator was used to examine the effects of texting on pedestrian street-crossing performance. BACKGROUND: Research suggests that street-crossing performance is impaired when pedestrians engage in cell phone conversations. Less is known about the impact of texting on street-crossing performance. METHOD: Thirty-two young adults completed three distraction conditions in a simulated street-crossing task: no distraction, phone conversation, and texting. A hands-free headset and a mounted tablet were used to conduct the phone and texting conversations, respectively. Participants moved through the virtual environment via a manual treadmill, allowing them to select crossing gaps and change their gait. RESULTS: During the phone conversation and texting conditions, participants had fewer successful crossings and took longer to initiate crossing. Furthermore, in the texting condition, a smaller percentage of time with head orientation toward the tablet, fewer head orientations toward the tablet, and a greater percentage of total characters typed before initiating crossing predicted greater crossing success. CONCLUSION: Our results suggest that (a) texting is as unsafe as phone conversations for street-crossing performance and (b) when subjects completed most of the texting task before initiating crossing, they were more likely to make it safely across the street. APPLICATION: Sending and receiving text messages negatively impact a range of real-world behaviors. These results may inform personal and policy decisions. | ['Sarah E. Banducci', 'Nathan Ward', 'John G. Gaspar', 'Kurt Schab', 'James A. Crowell', 'Henry Kaczmarski', 'Arthur F. Kramer'] | The effects of cell phone and text message conversations on simulated street crossing | 547,152
The pivotal proposal of this work is to present a reliable algorithm based on the local fractional homotopy perturbation Sumudu transform technique for solving a local fractional Tricomi equation occurring in fractal transonic flow. The proposed technique provides the results without any transformation of the equation into discrete counterparts or imposing restrictive assumptions and is completely free of round-off errors. The results of the scheme show that the approach is straightforward to apply and computationally very user-friendly and accurate. | ['Jagdev Singh', 'Devendra Kumar', 'Juan J. Nieto'] | A Reliable Algorithm for a Local Fractional Tricomi Equation Arising in Fractal Transonic Flow | 777,319 |
The idea of integrating users into a co-design process as part of a mass customization strategy is a promising approach for companies being forced to react to the growing individualization of demand. Compared to the rather huge amount of literature on manufacturing and information systems for mass customization, only little research discusses the role of the customer within the co-design process. Customers face new uncertainties and risks, coined “mass confusion” in this paper, when acting as co-designers. Building on a construction strategy of empirical management research in the form of six case studies, we propose the use of online communities for collaborative customer co-design in order to reduce the mass confusion phenomenon. In doing so, the paper challenges the assumption made by most mass customization researchers that offering customized products requires an individual (one-to-one) relationship between customer and supplier. The objective of the paper is to build and explore the idea of communities for customer co-design and transfer established knowledge on community support to this new area of application. | ['Frank T. Piller', 'Petra Schubert', 'Michael Koch', 'Kathrin M. Möslein'] | Overcoming Mass Confusion: Collaborative Customer Co-Design in Online Communities | 179,090 |
In this paper, a robust M-estimate adaptive filter for impulse noise suppression is proposed. The objective function used is based on a robust M-estimate. It has the ability to ignore or down-weight large signal errors when certain thresholds are exceeded. A systematic method for estimating such thresholds is also proposed. An advantage of the proposed method is that its solution is governed by a system of linear equations. Therefore, fast adaptation algorithms for traditional linear adaptive filters can be applied. In particular, an M-estimate recursive least squares (M-RLS) adaptive algorithm is studied in detail. Simulation results show that it is more robust against individual and consecutive impulse noise than the MN-LMS and N-RLS algorithms. It also has a fast convergence speed and a low steady-state error similar to its RLS counterpart. | ['Yuexian Zou', 'S.C. Chan', 'Tung-Sang Ng'] | A robust M-estimate adaptive filter for impulse noise suppression | 162,339
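A simplified M-RLS update can be sketched as standard RLS whose error first passes through an M-estimate weight: errors beyond a threshold (a robust scale estimate times a constant) contribute nothing, so impulses do not corrupt the filter. The paper's weight function and threshold estimation are replaced here by a plain hard cutoff, so this is a sketch of the idea, not the published algorithm.

```python
import numpy as np

def m_rls(x, d, lam=0.99, delta=1e2, xi=2.576, window=50):
    """Recursive least squares with a hard M-estimate error weight (sketch).

    x: (T, n) regressor rows; d: (T,) desired signal; lam: forgetting
    factor; delta: initial inverse-correlation scale; xi: threshold
    multiplier; window: history length for the robust scale estimate.
    """
    n = x.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)
    recent = []
    for t in range(len(d)):
        u = x[t]
        e = d[t] - u @ w
        recent = (recent + [abs(e)])[-window:]
        sigma = 1.483 * np.median(recent) + 1e-12   # robust scale of e
        if abs(e) <= xi * sigma:                    # weight q = 1: update
            k = P @ u / (lam + u @ P @ u)
            w = w + k * e
            P = (P - np.outer(k, u) @ P) / lam
        # else weight q = 0: treat the sample as an impulse and skip it
    return w
```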
A novel contrast enhancement algorithm based on the layered difference representation is proposed in this work. We first represent gray-level differences at multiple layers in a tree-like structure. Then, based on the observation that gray-level differences, occurring more frequently in the input image, should be more emphasized in the output image, we solve a constrained optimization problem to derive the transformation function at each layer. Finally, we aggregate the transformation functions at all layers into the overall transformation function. Simulation results demonstrate that the proposed algorithm enhances images efficiently in terms of both objective quality and subjective quality. | ['Chulwoo Lee', 'Chul Lee', 'Chang-Su Kim'] | Contrast enhancement based on layered difference representation | 286,252 |
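A one-layer simplification of the difference-based idea can be sketched as follows: count how often gray-level gaps occur between adjacent pixels and give frequent gaps larger slopes in the transformation function. The full method aggregates such functions across all layers and solves a constrained optimization instead, so this sketch only conveys the intuition.

```python
import numpy as np

def difference_based_transform(img):
    """Single-layer sketch of difference-driven contrast enhancement.

    img: (H, W) uint8 grayscale image. For each horizontally adjacent pixel
    pair, every unit gray-level step inside their interval gets credit; the
    resulting per-step frequencies become the slopes of a monotone
    transformation, so frequent differences get stretched.
    """
    h = np.zeros(255)
    a, b = img[:, :-1].astype(int), img[:, 1:].astype(int)
    lo, hi = np.minimum(a, b), np.maximum(a, b)
    for l, u in zip(lo.ravel(), hi.ravel()):
        if u > l:
            h[l:u] += 1            # credit each unit gap inside [l, u)
    slope = h / h.sum() if h.sum() else np.full(255, 1.0 / 255)
    T = np.concatenate([[0.0], np.cumsum(slope)]) * 255.0  # length 256
    return T[img.astype(int)].astype(np.uint8)
```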
In this paper, we address the effects of environmental mobility, that is, the ambient motion of entities like people and vehicles in the vicinity of wireless communication, on channel characteristics and wireless network performance. We present a three-step process of measurements, modeling, and network simulation to quantify the significance of environmental mobility. Our field experiments show that the presence of people not only causes deep fades but also distorts the fading distribution. We model the shadowing loss by a three-knife-edge diffraction model and propose a two-state Markov process for the channel fading behavior. The models are validated against measurement data and implemented in a network simulator. The models are scalable and incur an execution overhead of less than 15%. We also show the impact of environmental mobility on protocol performance by means of two simulation case studies. We show that the MAC-layer data rate adaptation behavior is sensitive to environmental mobility and can result in 40% of packets being delivered at lower rates. A second study on ad hoc network performance shows that throughput is decreased by 20%. We have identified that, with environmental mobility, links are more sensitive to interference and routes are less stable. | ['Maneesh Varshney', 'Zhengrong Ji', 'Mineo Takai', 'Rajive L. Bagrodia'] | Modeling environmental mobility and its effect on network protocol stack | 347,312
['Zhi Li', 'Yubao Sun', 'Feng Wang', 'Qingshan Liu'] | Convolutional Neural Networks for Clothes Categories | 558,119 |
|
This paper focuses on the performance of wireless sensor networks characterized by a hybrid topology composed of mobile and static sensor nodes. The Routing Protocol for Low power and lossy networks (RPL), standardized as an IPv6 routing protocol for such networks, uses the trickle timer algorithm to handle changes in the network topology. However, this algorithm is not well adapted to dynamic environments. This paper enhances the trickle timer to meet mobility requirements. Most previous works have improved this algorithm without considering the random movement of nodes. In this work, the proposed timer algorithm takes into consideration the random trajectory of mobile nodes, pause time, and node velocity. It is also dynamically adjusted to prevent node disconnections. The performance of the modified protocol is evaluated and compared with native RPL, MERPL, and RPL with reverse Trickle. The results show that our protocol optimization offers better performance. | ['Fatma Gara', 'Leila Ben Saad', 'Elyes Ben Hamida', 'Bernard Tourancheau', 'Rahma Ben Ayed'] | An adaptive timer for RPL to handle mobility in wireless sensor networks | 897,894
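For context, the standard Trickle timer with one possible mobility-aware tweak (clamping the interval back to Imin when neighborhood movement is detected) can be sketched as below; the speed test is an assumed stand-in for the paper's actual adaptation rule, not a reproduction of it.

```python
import random

class AdaptiveTrickle:
    """Trickle timer with a mobility-aware reset (illustrative sketch).

    Standard Trickle doubles the interval I within [imin, imax] and
    suppresses transmission after k consistent messages. Here the interval
    is also clamped back to imin when an estimated neighbor speed exceeds a
    limit, approximating a mobility-aware adjustment.
    """
    def __init__(self, imin=0.1, imax=100.0, k=3):
        self.imin, self.imax, self.k = imin, imax, k
        self.i = imin
        self.interval_start()

    def interval_start(self):
        self.c = 0                                   # consistency counter
        self.t = random.uniform(self.i / 2, self.i)  # listen-only before t

    def on_consistent(self):
        self.c += 1

    def should_transmit(self):
        return self.c < self.k                       # suppression rule

    def interval_end(self, neighbor_speed=0.0, speed_limit=1.0):
        if neighbor_speed > speed_limit:
            self.i = self.imin        # mobile neighborhood: react quickly
        else:
            self.i = min(2 * self.i, self.imax)
        self.interval_start()
```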
UWB positioning systems are able to provide outstanding accuracy and precision of localization. However, system performance strongly depends on the quality of the reference clock oscillators installed in system devices and on the accuracy of anchor synchronization. The paper describes a new transmission scheme providing wireless synchronization of infrastructure nodes and reducing errors caused by the tag's clock oscillator tolerance and stability. The paper contains a description of the positioning system architecture and the algorithm for determining the time difference of arrival. The proposed scheme was investigated by simulation; the results are included in the paper. | ['Vitomir Djaja-Josko', 'Jerzy Kolakowski'] | A new transmission scheme for wireless synchronization and clock errors reduction in UWB positioning system | 940,521
Let $\gamma(G)$ denote the domination number of a graph $G$ and let $G \Box H$ denote the Cartesian product of graphs $G$ and $H$. We prove that $\gamma(G)\gamma(H) \le 2\gamma(G \Box H)$ for all simple graphs $G$ and $H$. 2000 Mathematics Subject Classifications: Primary 05C69, Secondary 05C35. We use $V(G)$, $E(G)$, $\gamma(G)$, respectively, to denote the vertex set, edge set, and domination number of the (simple) graph $G$. For a pair of graphs $G$ and $H$, the Cartesian product $G \Box H$ of $G$ and $H$ is the graph with vertex set $V(G) \times V(H)$ in which two vertices are adjacent if and only if they are equal in one coordinate and adjacent in the other. In 1963, V. G. Vizing [2] conjectured that for any graphs $G$ and $H$, $\gamma(G \Box H) \ge \gamma(G)\gamma(H)$. | ['W. Edwin Clark', 'Stephen Suen'] | Inequality Related to Vizing's Conjecture | 544,950
The problem of selecting small groups of itemsets that represent the data well has recently gained a lot of attention. We approach the problem by searching for the itemsets that compress the data efficiently. As a compression technique we use decision trees combined with a refined version of MDL. More formally, assuming that the items are ordered, we create a decision tree for each item that may only depend on the previous items. Our approach allows us to find complex interactions between the attributes, not just co-occurrences of 1s. Further, we present a link between the itemsets and the decision trees and use this link to export the itemsets from the decision trees. In this paper we present two algorithms. The first one is a simple greedy approach that builds a family of itemsets directly from data. The second one, given a collection of candidate itemsets, selects a small subset of these itemsets. Our experiments show that these approaches result in compact and high quality descriptions of the data. | ['Nikolaj Tatti', 'Jilles Vreeken'] | Finding Good Itemsets by Packing Data | 283,386 |
Many techniques have been proposed for mining sequential patterns in data streams. Nevertheless, the characteristics of these sequential patterns may change over time. For example, the sequential patterns may appear frequently in one time period, but rarely in others. However, most existing mining techniques ignore the changes which take place in sequential patterns over time, or use only a simple static decay function to assign greater importance to the more recent data in streams. Accordingly, this study proposes an adaptive model for mining the changes in sequential patterns of streams. In this model, the current and cumulative mining results for sequential patterns within streams are found, and the significant change patterns and the corresponding degree of change are identified. The degree of change between the current sequential patterns and those in the next mining round is then predicted, and the decay rate modified accordingly. The experimental results confirm the ability of the proposed model to automatically tune the decay rate in accordance with the present state of the data stream and the predicted degree of change of sequential patterns in the following mining round. | ['I-Hui Li', 'Jyun-Yao Huang', 'I-En Liao'] | Mining Sequential Pattern Changes | 563,077
['Giusi Castiglione', 'Antonio Restivo'] | L-Convex Polyominoes: A Survey. | 758,245 |
|
Power consumption of disk systems is an important issue in scientific computing, where data-intensive applications exercise disk storage extensively. While one can spin down idle disks when idleness is detected, spinning them up takes many cycles and consumes extra power. Therefore, it can be very useful in practice to improve disk reuse, that is, to use the same set of disks as much as possible. If this can be achieved, unused disks can be held in the so-called spin-down mode for longer durations of time, which helps increase power savings. This paper proposes an approach for reducing disk power consumption by increasing disk reuse. The proposed approach restructures a given application code considering the disk layouts of the datasets it manipulates. We implemented this disk layout-conscious approach within a publicly-available compilation framework and compared it against a conventional data reuse optimization approach (which is also implemented using the same compiler) using six scientific applications that perform disk I/O. The results collected so far indicate that our layout-conscious approach and the conventional data reuse optimization approach reduce disk energy consumption by 25.3% and 10.3%, respectively, on average, over the case where no disk power optimization is applied. The corresponding savings in total energy consumption (including CPU, memory and network energies) are 6.5% for the conventional approach and 16.5% for our disk layout-conscious approach. Our experimental evaluation also shows that the savings obtained are consistent with varying numbers of disks and alternate disk layouts. | ['Mahmut T. Kandemir', 'Seung Woo Son', 'Mustafa Karaköy'] | Improving disk reuse for reducing power consumption | 465,936
['Nadya Vasilyeva', 'Daniel A. Wilkenfeld', 'Tania Lombrozo'] | Goals Affect the Perceived Quality of Explanations. | 766,371 |
|
This paper presents a group critic system for object-oriented analysis and design. A group critic system is a critiquing system which is aware that the problems it finds in a design are the result of different users acting on different goals, and that all of them share responsibility for the problem. The environment also integrates a construction kit and an argumentative hypermedia system. We use annotations to point out criticisms, so that users can view the critiquing system as a true colleague. Annotations are also used as the cooperation medium among the designers. | ['Cleidson R. B. de Souza', 'S Jair Ferreira', 'Kléder Miranda Gonçalves', 'Jacques Wainer'] | A group critic system for object-oriented analysis and design | 134,477
Addressee detection (AD) is an important problem for dialog systems in human-human-computer scenarios (contexts involving multiple people and a system) because system-directed speech must be distinguished from human-directed speech. Recent work on AD (Shriberg et al., 2012) showed good results using prosodic and lexical features trained on in-domain data. In-domain data, however, is expensive to collect for each new domain. In this study we focus on lexical models and investigate how well out-of-domain data (either outside the domain, or from single-user scenarios) can fill in for matched in-domain data. We find that human-addressed speech can be modeled using out-of-domain conversational speech transcripts, and that human-computer utterances can be modeled using single-user data: the resulting AD system outperforms a system trained only on matched in-domain data. Further gains (up to a 4% reduction in equal error rate) are obtained when in-domain and out-of-domain models are interpolated. Finally, we examine which parts of an utterance are most useful. We find that the first 1.5 seconds of an utterance contain most of the lexical information for AD, and analyze which lexical items convey this. Overall, we conclude that the H-H-C scenario can be approximated by combining data from H-C and H-H scenarios only. | ['Heeyoung Lee', 'Andreas Stolcke', 'Elizabeth Shriberg'] | Using Out-of-Domain Data for Lexical Addressee Detection in Human-Human-Computer Dialog | 616,390
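The interpolation of in-domain and out-of-domain lexical models mentioned above is, in its simplest form, a per-token linear mixture; the function names and the fixed weight here are illustrative, and in practice the weight would be tuned on held-out data.

```python
import math

def interpolated_logprob(tokens, p_in, p_out, lam=0.5):
    """Score an utterance with a linear interpolation of two lexical models.

    tokens: the utterance's word tokens; p_in / p_out: functions returning
    per-token probabilities from the in-domain and out-of-domain models;
    lam: interpolation weight in [0, 1]. The class (computer- vs
    human-addressed) whose interpolated likelihood is higher wins.
    """
    return sum(math.log(lam * p_in(t) + (1 - lam) * p_out(t)) for t in tokens)
```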