_id | text |
---|---|
2b211f9553ec78ff17fa3ebe16c0a036ef33c54b | Marko A. Rodriguez is a graph systems architect at AT&T Interactive. He can be reached at marko<at>markorodriguez.com. Peter Neubauer is chief operating officer of Neo Technology. He can be reached at peter.neubauer<at>neotechnology.com. A graph is a data structure composed of dots (i.e., vertices) and lines (i.e., edges). The dots and lines of a graph can be organized into intricate arrangements. A graph's ability to denote objects and their relationships to one another allows for a surprisingly large number of things to be modeled as graphs. From the dependencies that link software packages to the wood beams that provide the framing of a house, almost anything has a corresponding graph representation. However, just because it is possible to represent something as a graph does not necessarily mean that its graph representation will be useful. If a modeler can leverage the plethora of tools and algorithms that store and process graphs, then such a mapping is worthwhile. This article explores the world of graphs in computing and exposes situations in which graphical models are beneficial. |
0c5e3186822a3d10d5377b741f36b6478d0a8667 | A central problem in artificial intelligence is that of planning to maximize future reward under uncertainty in a partially observable environment. In this paper we propose and demonstrate a novel algorithm which accurately learns a model of such an environment directly from sequences of action-observation pairs. We then close the loop from observations to actions by planning in the learned model and recovering a policy which is near-optimal in the original environment. Specifically, we present an efficient and statistically consistent spectral algorithm for learning the parameters of a Predictive State Representation (PSR). We demonstrate the algorithm by learning a model of a simulated high-dimensional, vision-based mobile robot planning task, and then perform approximate point-based planning in the learned PSR. Analysis of our results shows that the algorithm learns a state space which efficiently captures the essential features of the environment. This representation allows accurate prediction with a small number of parameters, and enables successful and efficient planning. |
16611312448f5897c7a84e2f590617f4fa3847c4 | Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. Typically, they are learned using search heuristics (such as the Baum-Welch / EM algorithm), which suffer from the usual local optima issues. While in general these models are known to be hard to learn with samples from the underlying distribution, we provide the first provably efficient algorithm (in terms of sample and computational complexity) for learning HMMs under a natural separation condition. This condition is roughly analogous to the separation conditions considered for learning mixture distributions (where, similarly, these models are hard to learn in general). Furthermore, our sample complexity results do not explicitly depend on the number of distinct (discrete) observations — they implicitly depend on this number through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing where the space of observations is sometimes the words in a language. Finally, the algorithm is particularly simple, relying only on a singular value decomposition and matrix multiplications. |
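A minimal numpy sketch of the kind of spectral method described above (observable-operator estimation from observation triples via an SVD, in the spirit of this paper's construction); the counting scheme, function names, and dense tensors are illustrative simplifications:

```python
import numpy as np

def spectral_hmm(triples, n_obs, m):
    """Estimate observable operators from i.i.d. observation triples.
    triples: iterable of (x1, x2, x3) symbol indices; n_obs: alphabet size;
    m: number of hidden states (the rank of the factorization)."""
    P1 = np.zeros(n_obs)                        # Pr[x1]
    P21 = np.zeros((n_obs, n_obs))              # Pr[x2, x1]
    P3x1 = np.zeros((n_obs, n_obs, n_obs))      # Pr[x3, x2 = x, x1], indexed [x][x3, x1]
    n = 0
    for x1, x2, x3 in triples:
        P1[x1] += 1
        P21[x2, x1] += 1
        P3x1[x2][x3, x1] += 1
        n += 1
    P1 /= n; P21 /= n; P3x1 /= n

    U, _, _ = np.linalg.svd(P21)
    U = U[:, :m]                                # top-m left singular vectors
    pinv = np.linalg.pinv(U.T @ P21)            # (U^T P21)^+
    b1 = U.T @ P1
    binf = np.linalg.pinv(P21.T @ U) @ P1
    B = [U.T @ P3x1[x] @ pinv for x in range(n_obs)]  # one operator per symbol
    return b1, binf, B

def sequence_prob(seq, b1, binf, B):
    """Estimated joint probability of an observation sequence."""
    b = b1
    for x in seq:
        b = B[x] @ b
    return float(binf @ b)
```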
8ad6fda2d41dd823d2569797c8c7353dad31b371 | We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes. |
4f3dbfec5c67f0fb0602d9c803a391bc2f6ee4c7 | A 20-GHz phase-locked loop with 4.9 ps peak-to-peak/0.65 ps rms jitter and -113.5 dBc/Hz phase noise at 10-MHz offset is presented. A half-duty sampled-feedforward loop filter that simply replaces the resistor with a switch and an inverter suppresses the reference spur down to -44.0 dBc. A design iteration procedure is outlined that minimizes the phase noise of a negative-g_m oscillator with a coupled microstrip resonator. Static frequency dividers made of pulsed latches operate faster than those made of flip-flops and achieve near 2:1 frequency range. The phase-locked loop fabricated in a 0.13-μm CMOS operates from 17.6 to 19.4 GHz and dissipates 480 mW. |
1fcaf7ddcadda724d67684d66856c107375f448b | We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their constituent sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions. |
20b41b2a0d8ee71efd3986b4baeed24eba904350 | OBJECTIVE
To investigate the relationship between maternal depression and child growth in developing countries through a systematic literature review and meta-analysis.
METHODS
Six databases were searched for studies from developing countries on maternal depression and child growth published up until 2010. Standard meta-analytical methods were followed and pooled odds ratios (ORs) for underweight and stunting in the children of depressed mothers were calculated using random effects models for all studies and for subsets of studies that met strict criteria on study design, exposure to maternal depression and outcome variables. The population attributable risk (PAR) was estimated for selected studies.
FINDINGS
Seventeen studies including a total of 13,923 mother and child pairs from 11 countries met inclusion criteria. The children of mothers with depression or depressive symptoms were more likely to be underweight (OR: 1.5; 95% confidence interval, CI: 1.2-1.8) or stunted (OR: 1.4; 95% CI: 1.2-1.7). Subanalysis of three longitudinal studies showed a stronger effect: the OR for underweight was 2.2 (95% CI: 1.5-3.2) and for stunting, 2.0 (95% CI: 1.0-3.9). The PAR for selected studies indicated that if the infant population were entirely unexposed to maternal depressive symptoms 23% to 29% fewer children would be underweight or stunted.
CONCLUSION
Maternal depression was associated with early childhood underweight and stunting. Rigorous prospective studies are needed to identify mechanisms and causes. Early identification, treatment and prevention of maternal depression may help reduce child stunting and underweight in developing countries. |
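For reference, a standard epidemiological form of the population attributable risk used in findings like the one above, with p_e the prevalence of exposure (here, maternal depressive symptoms) and the pooled odds ratio standing in for the relative risk (an approximation that is reasonable only for relatively rare outcomes):

```latex
\mathrm{PAR} \;=\; \frac{p_e\,(\mathrm{OR}-1)}{1 + p_e\,(\mathrm{OR}-1)}
```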
c596f88ccba5b7d5276ac6a9b68972fd7d14d959 | By bringing together the physical world of real objects with the virtual world of IT systems, the Internet of Things has the potential to significantly change both the enterprise world as well as society. However, the term is very much hyped and understood differently by different communities, especially because IoT is not a technology as such but represents the convergence of heterogeneous - often new - technologies pertaining to different engineering domains. What is needed in order to come to a common understanding is a domain model for the Internet of Things, defining the main concepts and their relationships, and serving as a common lexicon and taxonomy and thus as a basis for further scientific discourse and development of the Internet of Things. As we show, having such a domain model is also helpful in design of concrete IoT system architectures, as it provides a template and thus structures the analysis of use cases. |
5a9f4dc3e5d7c70d58c9512d7193d079c3331273 | We advocate the use of Gaussian Process Dynamical Models (GPDMs) for learning human pose and motion priors for 3D people tracking. A GPDM provides a low-dimensional embedding of human motion data, with a density function that gives higher probability to poses and motions close to the training data. With Bayesian model averaging a GPDM can be learned from relatively small amounts of data, and it generalizes gracefully to motions outside the training set. Here we modify the GPDM to permit learning from motions with significant stylistic variation. The resulting priors are effective for tracking a range of human walking styles, despite weak and noisy image measurements and significant occlusions. |
c3f2d101b616d82d07ca2cc4cb8ed0cb53fde21f | We conducted a human study to provide a reference for our current CD and EMD values reported on the rendered dataset. We provided the human subject with a GUI tool to create a triangular mesh from the image. The tool (see Fig. 1) enables the user to edit the mesh in 3D and to align the modeled object back to the input image. In total, 16 models were created from the input images of our validation set. N = 1024 points are sampled from each model. |
32791996c1040b9dcc34e71a05d72e5c649eeff9 | Ambulatory electrocardiography is increasingly being used in clinical practice to detect abnormal electrical behavior of the heart during ordinary daily activities. The utility of this monitoring can be improved by deriving respiration, which previously has been based on overnight apnea studies where patients are stationary, or on the use of multilead ECG systems for stress testing. We compared six respiratory measures derived from a single-lead portable ECG monitor with simultaneously measured respiration air flow obtained from an ambulatory nasal cannula respiratory monitor. Ten controlled 1-h recordings were performed covering activities of daily living (lying, sitting, standing, walking, jogging, running, and stair climbing) and six overnight studies. The best method was an average of a 0.2-0.8 Hz bandpass filter and an RR technique based on lengthening and shortening of the RR interval. Mean error rates with the reference gold standard were ±4 breaths per minute (bpm) (all activities), ±2 bpm (lying and sitting), and ±1 bpm (overnight studies). Statistically similar results were obtained using heart rate information alone (RR technique) compared to the best technique derived from the full ECG waveform, which simplifies data collection procedures. The study shows that respiration can be derived under dynamic activities from a single-lead ECG without significant differences from traditional methods. |
7eac1eb85b919667c785b9ac4085d8ca68998d20 | Education and training is the process by which the wisdom, knowledge and skills of one generation are passed on to the next. Today there are two forms of education and training: conventional education and distance education. Mobile learning, or "M-Learning", offers modern ways to support the learning process through mobile devices, such as handheld and tablet computers, MP3 players, smart phones and mobile phones. This document introduces the subject of mobile learning for education purposes. It examines what impact mobile devices have had on teaching and learning practices and goes on to look at the opportunities presented by the use of digital media on mobile devices. The main purpose of this paper is to describe the current state of mobile learning, its benefits, challenges, and barriers to supporting teaching and learning. Data for this paper were collected through bibliographic and internet research from January to March 2013. Four key areas will be addressed in this paper: 1. An analysis of mobile learning. 2. Differentiating e-learning from mobile learning. 3. The value and benefits of mobile learning. 4. The challenges and barriers of mobile learning. The study showed that M-Learning, as a form of distance learning, brought great benefits to society, including: training when it is needed; training at any time; training at any place; learner-centred content; avoidance of re-entry-to-work problems; training for taxpayers, and those fully occupied during university lectures and sessions at training centres; and the industrialisation of teaching and learning. Also, notebooks, mobile tablets, iPod touch devices, and iPads are very popular devices for mobile learning because of their cost and the availability of apps. |
57820e6f974d198bf4bbdf26ae7e1063bac190c3 | |
8e393c18974baa8d5d704edaf116f009cb919463 | A high-speed SerDes must meet multiple challenges including high-speed operation, intensive equalization techniques, low power consumption, small area and robustness. In order to meet new standards, such as OIF CEI-25G-LR, CEI-28G-MR/SR/VSR, IEEE 802.3bj and 32G-FC, data-rates are increased to 25 to 28 Gb/s, which is more than 75% higher than the previous generation of SerDes. For SerDes applications with several hundreds of lanes integrated in a single chip, power consumption is a very important factor while maintaining high performance. There are several previous works at 28 Gb/s or higher data-rates [1-2]. They use an unrolled DFE to meet the critical timing margin, but the unrolled DFE structure increases the number of DFE slicers, increasing the overall power and die area. In order to tackle these challenges, we introduce several circuit and architectural techniques. The analog front-end (AFE) uses a single-stage architecture and a compact on-chip passive inductor in the transimpedance amplifier (TIA), providing 15 dB of boost. The boost is adaptive and its adaptation loop is decoupled from the decision-feedback equalizer (DFE) adaptation loop by the use of a group-delay adaptation (GDA) algorithm. The DFE has a half-rate 1-tap unrolled structure with 2 total error latches for power and area reduction. A two-stage sense-amplifier-based slicer achieves a sensitivity of 15 mV and DFE timing closure. We also develop a high-speed clock buffer that uses a new active-inductor circuit. This active-inductor circuit has the capability to control the output common-mode voltage to optimize circuit operating points. |
505c58c2c100e7512b7f7d906a9d4af72f6e8415 | Complex Adaptive Systems series (John H. Holland, Christopher Langton, and Stewart W. Wilson, advisors): Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press edition, John H. Holland; Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life, edited by Francisco J. Varela and Paul Bourgine; Genetic Programming: On the Programming of Computers by Means of Natural Selection, John R. Koza. |
3a46c11ad7afed8defbb368e478dbf94c24f43a3 | Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and the Apache-Hadoop paradigm. We propose a basis, common terminology and functional factors upon which to analyze the two approaches of both paradigms. We discuss the concept of "Big Data Ogres" and their facets as means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementation approaches of these paradigms, shed light upon the reasons for their current "architecture" and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations, across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms, to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering), characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide an insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions. |
dc7024840a4ba7ab634517fae53e77695ff5dda9 | In this paper we propose a novel energy-efficient approach for the recognition of human activities using smartphones as wearable sensing devices, targeting assisted living applications such as remote patient activity monitoring for the disabled and the elderly. The method exploits fixed-point arithmetic in a modified multiclass Support Vector Machine (SVM) learning algorithm, allowing better preservation of smartphone battery life than the conventional floating-point based formulation while maintaining comparable system accuracy levels. Experiments show comparative results between this approach and the traditional SVM in terms of recognition performance and battery consumption, highlighting the advantages of the proposed method. |
f4cdd1d15112a3458746b58a276d97e79d8f495d | Regularizing the gradient norm of the output of a neural network with respect to its inputs is a powerful technique, rediscovered several times. This paper presents evidence that gradient regularization can consistently improve classification accuracy on vision tasks, using modern deep neural networks, especially when the amount of training data is small. We introduce our regularizers as members of a broader class of Jacobian-based regularizers. We demonstrate empirically on real and synthetic data that the learning process leads to gradients controlled beyond the training points, and results in solutions that generalize well. |
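As a concrete illustration, a minimal PyTorch sketch of one member of this family of regularizers, penalizing the squared norm of the loss gradient with respect to the inputs via double backpropagation; `model` and the weight `lam` are illustrative assumptions, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def regularized_loss(model, x, y, lam=0.01):
    """Cross-entropy plus a gradient-norm penalty on the inputs."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    # Gradient of the loss w.r.t. the *inputs*, kept in the graph
    # (create_graph=True) so the penalty itself can be backpropagated.
    grads, = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grads.pow(2).sum() / x.shape[0]
    return loss + lam * penalty
```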
984df1f081fbd623600ec45635e5d9a4811c0aef | Two Vivaldi antenna arrays are presented. The first is an 8-element tapered slot array covering the 1.2 to 4 GHz band for STW applications for brick/concrete wall imaging. The second is a 16-element antipodal array operating at 8 to 10.6 GHz for high resolution imaging when penetrating through dry wall. Based on the two designs, and utilizing a smooth wideband slot-to-microstrip transition to feed the Vivaldi antenna array, a 1–10 GHz frequency band can be covered. Alternatively, the design can be used in a reconfigurable structure to cover either a 1–3 GHz or 8–10 GHz band. Experiments and measurements have been completed and will be discussed in detail. The designs will significantly impact the development of compact reconfigurable and portable systems. |
e3f4fdf6d2f10ebe4cfc6d0544afa63976527d60 | This paper presents a 324-element 2-D broadside array for radio astronomy instrumentation which is sensitive to two mutually orthogonal polarizations. The array is composed of cruciform units consisting of a group of four Vivaldi antennas arranged in a cross-shaped structure. The Vivaldi antenna used in this array exhibits a radiation intensity characteristic with a symmetrical main beam of 87.5° at 3 GHz and 44.2° at 6 GHz. The measured maximum side/backlobe level is 10.3 dB below the main beam level. The array can operate at a high frequency of 5.4 GHz without the formation of grating lobes. |
1a090df137014acab572aa5dc23449b270db64b4 | |
9ae252d3b0821303f8d63ba9daf10030c9c97d37 | We propose a novel approach to learn and recognize natural scene categories. Unlike previous work, it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region is represented as part of a "theme". In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codewords distribution over the themes without supervision. We report satisfactory categorization performances on a large set of 13 categories of complex scenes. |
fa6cbc948677d29ecce76f1a49cea01a75686619 | In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected close together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category. |
1ac52b7d8db223029388551b2db25657ed8c9852 | In this paper, we propose a machine-learning solution to problems consisting of many similar prediction tasks. Each of the individual tasks has a high risk of overfitting. We combine two types of knowledge transfer between tasks to reduce this risk: multi-task learning and hierarchical Bayesian modeling. Multi-task learning is based on the assumption that there exist features typical to the task at hand. To find these features, we train a huge two-layered neural network. Each task has its own output, but shares the weights from the input to the hidden units with all other tasks. In this way a relatively large set of possible explanatory variables (the network inputs) is reduced to a smaller and easier to handle set of features (the hidden units). Given this set of features and after an appropriate scale transformation, we assume that the tasks are exchangeable. This assumption allows for a hierarchical Bayesian analysis in which the hyperparameters can be estimated from the data. Effectively, these hyperparameters act as regularizers and prevent overfitting. We describe how to make the system robust against nonstationarities in the time series and give directions for further improvement. We illustrate our ideas on a database regarding the prediction of newspaper sales. |
1e56ed3d2c855f848ffd91baa90f661772a279e1 | We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification. |
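A hedged usage sketch of this kind of topic model with scikit-learn's variational LDA implementation; the toy corpus and topic count are illustrative:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the cat sat on the mat", "dogs and cats are pets",
        "stocks fell as markets closed", "investors sold shares"]
X = CountVectorizer().fit_transform(docs)        # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                     # per-document topic mixtures
print(theta.round(2))                            # each row sums to ~1
```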
e990a41e8f09e0ef4695c39af351bf25f333eefa | |
1f8116db538169de3553b1091e82107f7594301a | |
539ea86fa738afd939fb18566107c971461f8548 | Mappings to structured output spaces (strings, trees, partitions, etc.) are typically learned using extensions of classification algorithms to simple graphical structures (e.g., linear chains) in which search and parameter estimation can be performed exactly. Unfortunately, in many complex problems, it is rare that exact search or parameter estimation is tractable. Instead of learning exact models and searching via heuristic means, we embrace this difficulty and treat the structured output problem in terms of approximate search. We present a framework for learning as search optimization, and two parameter updates with convergence theorems and bounds. Empirical evidence shows that our integrated approach to learning and decoding can outperform exact models at smaller computational cost. |
1219fb39b46aabd74879a7d6d3c724fb4e55aeae | We develop a perspective on technology entrepreneurship as involving agency that is distributed across different kinds of actors. Each actor becomes involved with a technology and, in the process, generates inputs that result in the transformation of an emerging technological path. The steady accumulation of inputs to a technological path generates a momentum that enables and constrains the activities of distributed actors. In other words, agency is not only distributed, but it is embedded as well. We explicate this perspective through a comparative study of processes underlying the emergence of wind turbines in Denmark and in the United States. Through our comparative study, we flesh out “bricolage” and “breakthrough” as contrasting approaches to the engagement of actors in shaping technological paths. |
2266636d87e44590ade738b92377d1fe1bc5c970 | |
2af586c64c32baeb445992e0ea6b76bbbbc30c7f | |
0e8b8e0c37b0ebc9c36b99103a487dbbbdf9ee97 | |
2c03df8b48bf3fa39054345bafabfeff15bfd11d | Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. |
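A minimal PyTorch sketch of the basic residual block the abstract describes, assuming equal input/output channels and stride 1 (the full networks also use projection shortcuts and bottleneck blocks):

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # F(x) + x: learn the residual, add the input back
```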
5763c2c62463c61926c7e192dcc340c4691ee3aa | We propose a deep learning method for single image superresolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the lowresolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. |
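A minimal PyTorch sketch of the described three-layer mapping (patch extraction, non-linear mapping, reconstruction); the 9-1-5 kernel sizes and 64/32 filter counts follow the commonly cited SRCNN configuration and are assumptions here:

```python
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),  # patch extraction
            nn.Conv2d(64, 32, kernel_size=1),           nn.ReLU(),  # non-linear mapping
            nn.Conv2d(32, 1, kernel_size=5, padding=2),             # reconstruction
        )

    def forward(self, x):  # x: a bicubic-upscaled low-resolution image
        return self.net(x)
```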
2db168f14f3169b8939b843b9f4caf78c3884fb3 | In this letter, a broadband bent triangular omnidirectional antenna is presented for RF energy harvesting. The antenna has a bandwidth for VSWR ≤ 2 from 850 MHz to 1.94 GHz. The antenna is designed to receive both horizontally and vertically polarized waves and has a stable radiation pattern over the entire bandwidth. The antenna has also been optimized for the energy harvesting application and is designed for a 100 Ω input impedance to provide passive voltage amplification and impedance matching to the rectifier. A peak efficiency of 60% and 17% is obtained for a load of 500 Ω at 980 and 1800 MHz, respectively. At a cell site, while harvesting all bands simultaneously, a voltage of 3.76 V for open circuit and 1.38 V across a load of 4.3 kΩ is obtained at a distance of 25 m using an array of two elements of the rectenna. |
484ac571356251355d3e24dcb23bdd6d0911bd94 | Recent scientific and technological advances have witnessed an abundance of structural patterns modeled as graphs. As a result, it is of special interest to process graph containment queries effectively on large graph databases. Given a graph database G and a query graph q, the graph containment query is to retrieve all graphs in G which contain q as subgraph(s). Due to the vast number of graphs in G and the complexity of subgraph isomorphism testing, it is desirable to make use of high-quality graph indexing mechanisms to reduce the overall query processing cost. In this paper, we propose a new cost-effective graph indexing method based on frequent tree-features of the graph database. We analyze the effectiveness and efficiency of trees as indexing features from three critical aspects: feature size, feature selection cost, and pruning power. In order to achieve better pruning ability than existing graph-based indexing methods, we select, in addition to frequent tree-features (Tree), a small number of discriminative graphs (∆) on demand, without a costly graph mining process beforehand. Our study verifies that (Tree+∆) is a better choice than graph for indexing purposes, denoted (Tree+∆ ≥ Graph), to address the graph containment query problem. It has two implications: (1) the index construction by (Tree+∆) is efficient, and (2) the graph containment query processing by (Tree+∆) is efficient. Our experimental studies demonstrate that (Tree+∆) has a compact index structure, achieves an order of magnitude better performance in index construction, and most importantly, outperforms up-to-date graph-based indexing methods, gIndex and C-Tree, in graph containment query processing. |
22749899b50c5113516b9820f875a580910aa746 | A small slot-loaded patch antenna design developed for receiving both L1 and L2 band GPS signals is discussed. The dual band coverage is achieved by using a patch mode at the L2 band and a slot mode at the L1 band. High dielectric material and a meandered slot line are employed to reduce the antenna size down to 25.4 mm in diameter. RHCP is achieved by combining two orthogonal modes via a small 0°-90° hybrid chip. Both patch and slot modes share a single proximity probe conveniently located on the side of the antenna (Fig. 1). This paper discusses the design procedure as well as simulated antenna performance. |
afbe59950a7d452ce0a3f412ee865f1e1d94d9ef | Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations. |
b8aa8b5d06c98a900d8cea61864669b28c3ac0fc | This article presents a comprehensive survey of routing protocols proposed for Vehicular Delay Tolerant Networks (VDTNs). DTNs are utilized in various operational environments, including those subject to disruption and disconnection and those with high delay, such as Vehicular Ad-Hoc Networks (VANETs). We focus on a special type of VANET, where the vehicular traffic is sparse and direct end-to-end paths between communicating parties do not always exist; communication in this context falls into the category of VDTN. Due to the limited transmission range of a Road Side Unit (RSU), remote vehicles in a VDTN may not connect to the RSU directly and thus have to rely on intermediate vehicles to relay the packets. During the message relay process, complete end-to-end paths may not exist in highly partitioned VANETs, so intermediate vehicles must buffer and forward messages opportunistically. Through buffer, carry and forward, a message can eventually be delivered to its destination even if an end-to-end connection never exists between source and destination. The main objective of routing protocols in DTNs is to maximize the probability of delivery to the destination while minimizing the end-to-end delay. Vehicular traffic models are also important for DTN routing in vehicular networks because the performance of DTN routing protocols is closely related to the population and mobility models of the network. |
4555fd3622908e2170e4ffdd717b83518b123b09 | The paper presents the effects on antenna parameters when an antenna is placed horizontally near a metal plate. The plate has finite size and rectangular shape. A folded dipole antenna is used and it is placed symmetrically above the plate. The FEM (finite element method) is used to simulate the dependency of antenna parameters on the size of the plate and the distance between the plate and the antenna. The presence of the metal plate, even a small one if it is at the right distance, causes very big changes in the behaviour of the antenna. The bigger the plate, especially in width, the sharper and narrower are the lobes of the radiation pattern. The antenna height defines how many lobes the radiation pattern has. A number of the antenna parameters, including impedance, directivity and front-to-back ratio, change periodically as the antenna height is increased. The resonant frequency of the antenna also changes under the influence of the metal plate. |
d70cd3d2fe0a194321ee92c305976873b883d529 | A wideband 57.7–84.2 GHz phase shifter is presented using a compact Lange coupler to generate in-phase and quadrature signals. The Lange coupler is followed by two balun transformers that provide the IQ vector modulation with differential I and Q signals. The implemented phase shifter demonstrates an average 6-dB insertion loss and 5-dB gain variation. The measured average rms phase and gain errors are 7 degrees and 1 dB, respectively. The phase shifter is implemented in GlobalFoundries 45-nm SOI CMOS technology using a trap-rich substrate. The chip area is 385 μm × 285 μm and the phase shifter consumes less than 17 mW. To the best of the authors' knowledge, this is the first phase shifter that covers both the 60 GHz band and E-band frequencies with a fractional bandwidth of 37%. |
eb58118b9db1e95f9792f39c3780dbba3bb966cb | This paper presents a wearable inertial measurement system and its associated spatiotemporal gait analysis algorithm to obtain quantitative measurements and explore clinical indicators from the spatiotemporal gait patterns for patients with stroke or Parkinson’s disease. The wearable system is composed of a microcontroller, a triaxial accelerometer, a triaxial gyroscope, and an RF wireless transmission module. The spatiotemporal gait analysis algorithm, consisting of procedures of inertial signal acquisition, signal preprocessing, gait phase detection, and ankle range of motion estimation, has been developed for extracting gait features from accelerations and angular velocities. In order to estimate accurate ankle range of motion, we have integrated accelerations and angular velocities into a complementary filter for reducing the accumulation of integration error of inertial signals. All 24 participants mounted the system on their foot to walk along a straight line of 10 m at normal speed and their walking recordings were collected to validate the effectiveness of the proposed system and algorithm. Experimental results show that the proposed inertial measurement system with the designed spatiotemporal gait analysis algorithm is a promising tool for automatically analyzing spatiotemporal gait information, serving as clinical indicators for monitoring therapeutic efficacy for diagnosis of stroke or Parkinson’s disease. |
7e7f14f325d7e8d70e20ca22800ad87cfbf339ff | |
002a8b9ef513d46dc8dcce85c04a87ae6a221b4c | We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter ν lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of ν, and report experimental results. |
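A short scikit-learn sketch showing the role of this parameter (exposed there as `nu`): it lower-bounds the fraction of support vectors and upper-bounds the fraction of margin errors; the synthetic data is illustrative:

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu).fit(X, y)
    frac_sv = clf.support_.size / len(X)          # fraction of training points kept as SVs
    print(f"nu={nu:.1f}  support-vector fraction={frac_sv:.2f}")  # grows with nu
```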
0911bcf6bfff20a84a56b9d448bcb3d72a1eb093 | Regularized training of an autoencoder typically results in hidden unit biases that take on large negative values. We show that negative biases are a natural result of using a hidden layer whose responsibility is to both represent the input data and act as a selection mechanism that ensures sparsity of the representation. We then show that negative biases impede the learning of data distributions whose intrinsic dimensionality is high. We also propose a new activation function that decouples the two roles of the hidden layer and that allows us to learn representations on data with very high intrinsic dimensionality, where standard autoencoders typically fail. Since the decoupled activation function acts like an implicit regularizer, the model can be trained by minimizing the reconstruction error of training data, without requiring any additional regularization. |
27f9b805de1f125273a88786d2383621e60c6094 | In this paper we propose a kinematic approach for tracked mobile robots in order to improve motion control and pose estimation. Complex dynamics due to slippage and track–soil interactions make it difficult to predict the exact motion of the vehicle on the basis of track velocities. Nevertheless, real-time computations for autonomous navigation require an effective kinematics approximation without introducing dynamics in the loop. The proposed solution is based on the fact that the instantaneous centers of rotation (ICRs) of treads on the motion plane with respect to the vehicle are dynamics-dependent, but they lie within a bounded area. Thus, optimizing constant ICR positions for a particular terrain results in an approximate kinematic model for tracked mobile robots. Two different approaches are presented for off-line estimation of kinematic parameters: (i) simulation of the stationary response of the dynamic model for the whole velocity range of the vehicle; (ii) introduction of an experimental setup so that a genetic algorithm can produce the model from actual sensor readings. These methods have been evaluated for on-line odometric computations and low-level motion control with the Auriga-α mobile robot on a hard-surface flat soil at moderate speeds. KEY WORDS—tracked vehicles, kinematic control, mobile robotics, parameter identification, dynamics simulation |
04caa1a55b12d5f3830ed4a31c4b47921a3546f2 | Kernel classifiers and regressors designed for structured data, such as sequences, trees and graphs, have significantly advanced a number of interdisciplinary areas such as computational biology and drug design. Typically, kernels are designed beforehand for a data type, either exploiting statistics of the structures or making use of probabilistic generative models, and then a discriminative classifier is learned based on the kernels via convex optimization. However, such an elegant two-stage approach has also kept kernel methods from scaling up to millions of data points and from exploiting discriminative information to learn feature representations. We propose structure2vec, an effective and scalable approach for structured data representation based on the idea of embedding latent variable models into feature spaces, and learning such feature spaces using discriminative information. Interestingly, structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. In applications involving millions of data points, we showed that structure2vec runs 2 times faster, produces models which are 10,000 times smaller, while at the same time achieving state-of-the-art predictive performance. |
1dc5b2114d1ff561fc7d6163d8f4e9c905ca12c4 | It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests. |
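A minimal Python sketch of the rank-based inverse normal (rankit) transformation the study found most beneficial, followed by a Pearson correlation on the transformed scores; the synthetic skewed data is illustrative:

```python
import numpy as np
from scipy import stats

def rankit(x):
    """Map values to normal quantiles via their ranks: (r - 0.5) / n."""
    ranks = stats.rankdata(x)                 # average ranks for ties
    return stats.norm.ppf((ranks - 0.5) / len(x))

np.random.seed(0)
x = np.random.exponential(size=50)            # skewed, nonnormal data
y = x + np.random.exponential(size=50)
r, p = stats.pearsonr(rankit(x), rankit(y))   # Pearson r on transformed scores
print(f"r={r:.3f}, p={p:.4f}")
```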
d3abb0b5b3ce7eb464846bbdfd93e0fbf505e954 | In this paper, we compare three different concepts of compact antenna arrays fed by substrate integrated waveguides (SIW). Antenna concepts differ in the type of radiators. Slots represent magnetic linear radiators, patches are electric surface radiators, and Vivaldi slots belong to travelling-wave antennas. Hence, the SIW feeders have to exploit different mechanisms of exciting antenna elements. Impedance and radiation properties of studied antenna arrays have been related to the normalized frequency. Antenna arrays have been mutually compared to show fundamental dependencies of final parameters of the designed antennas on state variables of antennas, on SIW feeder architectures and on related implementation details. |
e4acaccd3c42b618396c9c28dae64ae7091e36b8 | A novel I/Q receiver array is demonstrated that adapts phase shifts in each receive channel to point a receive beam toward an incident RF signal. The measured array operates at 8.1 GHz and covers steering angles of ±35 degrees for a four-element array. Additionally, the receiver incorporates an I/Q down-converter and demodulates 64QAM with EVM less than 4%. The chip is fabricated in a 45 nm CMOS SOI process and occupies an area of 3.45 mm² while consuming 143 mW dc power. |
149bf28af91cadf2cd933bd477599cca40f55ccd | We propose a learning architecture that is able to do reinforcement learning based on raw visual input data. In contrast to previous approaches, not only the control policy is learned. In order to be successful, the system must also autonomously learn how to extract relevant information out of a high-dimensional stream of input information, for which the semantics are not provided to the learning system. We give a first proof-of-concept of this novel learning architecture on a challenging benchmark, namely visual control of a racing slot car. The resulting policy, learned only by success or failure, is hardly beaten by an experienced human player. |
759d9a6c9206c366a8d94a06f4eb05659c2bb7f2 | To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of “closed set” recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is “open set” recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel “1-vs-set machine,” which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks. |
00960cb3f5a74d23eb5ded93f1aa717b9c6e6851 | Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions. The ability to accurately model distributions over functions is critical to the effectiveness of Bayesian optimization. Although Gaussian processes provide a flexible prior over functions, there are various classes of functions that remain difficult to model. One of the most frequently occurring of these is the class of non-stationary functions. The optimization of the hyperparameters of machine learning algorithms is a problem domain in which parameters are often manually transformed a priori, for example by optimizing in “log-space,” to mitigate the effects of spatially-varying length scale. We develop a methodology for automatically learning a wide family of bijective transformations or warpings of the input space using the Beta cumulative distribution function. We further extend the warping framework to multi-task Bayesian optimization so that multiple tasks can be warped into a jointly stationary space. On a set of challenging benchmark optimization tasks, we observe that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably. |
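A minimal sketch of the core idea, assuming inputs scaled to [0, 1]: each dimension is warped through a Beta CDF (a monotone bijection) and a stationary kernel is then evaluated on the warped inputs; the shape parameters a, b, which the paper learns jointly with the other hyperparameters, are fixed here for illustration:

```python
import numpy as np
from scipy.stats import beta

def warp(x, a, b):
    """Bijective warping of [0, 1] via the Beta CDF."""
    return beta.cdf(x, a, b)

x = np.linspace(0.0, 1.0, 5)
print(warp(x, a=0.5, b=2.0))   # stretches one end of the space, compresses the other
# A stationary kernel would then be evaluated as k(warp(x), warp(x')) instead of k(x, x').
```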
b53e4c232833a8e663a9cf15dcdd050ff801c05c | We present a scalable system for high-throughput real-time analysis of heterogeneous data streams. Our architecture enables incremental development of models for predictive analytics and anomaly detection as data arrives into the system. In contrast with batch data-processing systems, such as Hadoop, that can have high latency, our architecture allows for ingest and analysis of data on the fly, thereby detecting and responding to anomalous behavior in near real time. This timeliness is important for applications such as insider threat, financial fraud, and network intrusions. We demonstrate an application of this system to the problem of detecting insider threats, namely, the misuse of an organization's resources by users of the system and present results of our experiments on a publicly available insider threat dataset. |
39b58ef6487c893219c77c61c762eee5694d0e36 | Classification is an important problem in the emerging field of data mining. Although classification has been studied extensively in the past, most of the classification algorithms are designed only for memory-resident data, thus limiting their suitability for data mining large data sets. This paper discusses issues in building a scalable classifier and presents the design of SLIQ, a new classifier. SLIQ is a decision tree classifier that can handle both numeric and categorical attributes. It uses a novel pre-sorting technique in the tree-growth phase. This sorting procedure is integrated with a breadth-first tree growing strategy to enable classification of disk-resident datasets. SLIQ also uses a new tree-pruning algorithm that is inexpensive, and results in compact and accurate trees. The combination of these techniques enables SLIQ to scale for large data sets and classify data sets irrespective of the number of classes, attributes, and examples (records), thus making it an attractive tool for data mining. |
1f25ed3c9707684cc0cdf3e8321c791bc7164147 | Classification is an important data mining problem. Although classification is a well-studied problem, most of the current classification algorithms require that all or a portion of the entire dataset remain permanently in memory. This limits their suitability for mining over large databases. We present a new decision-tree-based classification algorithm, called SPRINT, that removes all of the memory restrictions, and is fast and scalable. The algorithm has also been designed to be easily parallelized, allowing many processors to work together to build a single consistent model. This parallelization, also presented here, exhibits excellent scalability as well. The combination of these characteristics makes the proposed algorithm an ideal tool for data mining. |
7c3a4b84214561d8a6e4963bbb85a17a5b1e003a | |
76c87ec44fc5dc96bc445abe008deaf7c97c9373 | This paper presents a planar grid array antenna with a 100 Ω differential microstrip line feed on a single layer of standard soft substrate. The antenna operates in the 79 GHz frequency band for automotive radar applications. Its single row design offers a narrow beam in elevation and a wide beam in azimuth. Together with the differential microstrip line feeding, the antenna is suitable for differential multichannel MMICs in the frequency range. |
bc7308a97ec2d3f7985d48671abe7a8942a5b9f8 | This paper introduces an approach to sentiment analysis which uses support vector machines (SVMs) to bring together diverse sources of potentially pertinent information, including several favorability measures for phrases and adjectives and, where available, knowledge of the topic of the text. Models using the features introduced are further combined with unigram models which have been shown to be effective in the past (Pang et al., 2002) and lemmatized versions of the unigram models. Experiments on movie review data from Epinions.com demonstrate that hybrid SVMs which combine unigram-style feature-based SVMs with those based on real-valued favorability measures obtain superior performance, producing the best results yet published using this data. Further experiments using a feature set enriched with topic information on a smaller dataset of music reviews hand-annotated for topic are also reported, the results of which suggest that incorporating topic information into such models may also yield improvement. |
be389fb59c12c8c6ed813db13ab74841433ea1e3 | Fig. 1. We present iMapper, a method that reasons about the interactions of humans with objects, to recover both a plausible scene arrangement and human motions, that best explain an input monocular video (see inset). We fit characteristic interactions called scenelets (e.g., A, B, C) to the video and use them to reconstruct a plausible object arrangement and human motion path (left). The key challenge is that reliable fitting requires information about occlusions, which are unknown (i.e., latent). (Right) We show an overlay (from top-view) of our result over manually annotated groundtruth object placements. Note that object meshes are placed based on estimated object category, location, and size information. |
f24a1af3bd8873920593786d81590d29520cfebc | This letter presents the design and experimental validation of a novel elliptic filter based on the multilayered substrate integrated waveguide (MSIW) technique. A C-band elliptic filter with four folded MSIW cavities is simulated using high-frequency structure simulator software and fabricated with a two-layer printed circuit board process; the measured results show good performance and agree with the simulated results. |
8052bc5f9beb389b3144d423e7b5d6fcf5d0cc4f | Attributes are semantic visual properties shared by objects. They have been shown to improve object recognition and to enhance content-based image search. While attributes are expected to cover multiple categories, e.g. a dalmatian and a whale can both have "smooth skin", we find that the appearance of a single attribute varies quite a bit across categories. Thus, an attribute model learned on one category may not be usable on another category. We show how to adapt attribute models towards new categories. We ensure that positive transfer can occur between a source domain of categories and a novel target domain, by learning in a feature subspace found by feature selection where the data distributions of the domains are similar. We demonstrate that when data from the novel domain is limited, regularizing attribute models for that novel domain with models trained on an auxiliary domain (via Adaptive SVM) improves the accuracy of attribute prediction. |
01094798b20e96e1d029d6874577167f2214c7b6 | Fast concurrent hash tables are an increasingly important building block as we scale systems to greater numbers of cores and threads. This paper presents the design, implementation, and evaluation of a high-throughput and memory-efficient concurrent hash table that supports multiple readers and writers. The design arises from careful attention to systems-level optimizations such as minimizing critical section length and reducing interprocessor coherence traffic through algorithm re-engineering. As part of the architectural basis for this engineering, we include a discussion of our experience and results adopting Intel's recent hardware transactional memory (HTM) support to this critical building block. We find that naively allowing concurrent access using a coarse-grained lock on existing data structures reduces overall performance with more threads. While HTM mitigates this slowdown somewhat, it does not eliminate it. Algorithmic optimizations that benefit both HTM and designs for fine-grained locking are needed to achieve high performance.
Our performance results demonstrate that our new hash table design---based around optimistic cuckoo hashing---outperforms other optimized concurrent hash tables by up to 2.5x for write-heavy workloads, even while using substantially less memory for small key-value items. On a 16-core machine, our hash table executes almost 40 million insert and more than 70 million lookup operations per second. |
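A minimal single-threaded Python sketch of the cuckoo hashing scheme at the core of this design (the paper's table adds multi-slot buckets, optimistic version counters, and fine-grained locking/HTM, all omitted here); it assumes distinct keys and omits the update-in-place path:

```python
class CuckooHash:
    def __init__(self, capacity=64):
        self.t1 = [None] * capacity
        self.t2 = [None] * capacity
        self.n = capacity

    def _h1(self, key): return hash(key) % self.n
    def _h2(self, key): return hash((key, 0x9E3779B9)) % self.n

    def get(self, key):
        # Every lookup probes at most two slots: one per table.
        for slot in (self.t1[self._h1(key)], self.t2[self._h2(key)]):
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

    def put(self, key, value, max_kicks=32):
        item, use_t1 = (key, value), True
        for _ in range(max_kicks):
            t, h = (self.t1, self._h1(item[0])) if use_t1 else (self.t2, self._h2(item[0]))
            if t[h] is None:
                t[h] = item
                return
            item, t[h] = t[h], item   # evict the occupant and re-place it
            use_t1 = not use_t1       # the evicted item goes to its other table
        raise RuntimeError("too many displacements; grow the table and rehash")
```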
5685a394b25fcb27b6ad91f7325f2e60a9892e2a | Graph databases (GDB) have recently arisen to overcome the limits of traditional databases for storing and managing data with graph-like structure. Today, they represent a requirement for many applications that manage graph-like data, like social networks. Most of the techniques applied to optimize queries in graph databases have been used in traditional databases, distributed systems, etc., or are inspired by graph theory. However, their reuse in graph databases should take care of the main characteristics of graph databases, such as dynamic structure, highly interconnected data, and the ability to efficiently access data relationships. In this paper, we survey the query optimization techniques in graph databases. In particular, we focus on the features they have introduced to improve querying graph-like data. |
0541d5338adc48276b3b8cd3a141d799e2d40150 | MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day. |
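To make the division of labor concrete, here is a toy single-process emulation of the model on the canonical word-count example. The `run_mapreduce` "runtime" below stands in for the cluster machinery (partitioning, shuffle, scheduling, fault tolerance) that the real system provides; all function names are illustrative.

```python
# Toy MapReduce emulation: the user supplies only map_fn and reduce_fn.
from collections import defaultdict

def map_fn(doc_id, text):
    for word in text.split():
        yield word.lower(), 1               # emit intermediate (key, value) pairs

def reduce_fn(word, counts):
    yield word, sum(counts)                 # combine all values for one key

def run_mapreduce(inputs, map_fn, reduce_fn):
    shuffle = defaultdict(list)             # "shuffle" phase: group values by key
    for key, value in inputs:
        for k, v in map_fn(key, value):
            shuffle[k].append(v)
    output = {}
    for k in sorted(shuffle):               # reduce phase, one call per key
        for out_k, out_v in reduce_fn(k, shuffle[k]):
            output[out_k] = out_v
    return output

docs = [("d1", "the cat sat"), ("d2", "the cat ran")]
print(run_mapreduce(docs, map_fn, reduce_fn))  # {'cat': 2, 'ran': 1, 'sat': 1, 'the': 2}
```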
683c8f5c60916751bb23f159c86c1f2d4170e43f | |
3a116f2ae10a979c18787245933cb9f984569599 | Wireless sensor networks (WSNs) have emerged as an effective solution for a wide range of applications. Most of the traditional WSN architectures consist of static nodes which are densely deployed over a sensing area. Recently, several WSN architectures based on mobile elements (MEs) have been proposed. Most of them exploit mobility to address the problem of data collection in WSNs. In this article we first define WSNs with MEs and provide a comprehensive taxonomy of their architectures, based on the role of the MEs. Then we present an overview of the data collection process in such a scenario, and identify the corresponding issues and challenges. On the basis of these issues, we provide an extensive survey of the related literature. Finally, we compare the underlying approaches and solutions, with hints to open problems and future research directions. |
e7b50e3f56e21fd2a5eb34923d427a0bc6dd8905 | In this paper a new approach to the synthesis of coupling matrices for microwave filters is presented. The new approach represents an advance on existing direct and optimization methods for coupling matrix synthesis in that it will exhaustively discover all possible coupling matrix solutions for a network if more than one exists. This enables a selection to be made of the set of coupling values, resonator frequency offsets, parasitic coupling tolerance, etc., that is best suited to the technology with which the microwave filter is intended to be realized. To demonstrate the use of the method, the case of the recently introduced 'extended box' (EB) coupling matrix configuration is taken. The EB represents a new class of filter configuration featuring a number of important advantages, one of which is the existence of multiple coupling matrix solutions for each prototype filtering function, e.g., 16 for 8th-degree cases. This case is taken as an example to demonstrate the use of the synthesis method, yielding one solution suitable for dual-mode realization and one where some couplings are small enough to neglect. Index Terms: coupling matrix, filter synthesis, Groebner basis, inverted characteristic, multiple solutions. |
a6f1dfcc44277d4cfd8507284d994c9283dc3a2f | We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture. |
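Since the eigenface method is essentially PCA on flattened face images followed by nearest-neighbour matching in the projected "face space", a compact NumPy sketch captures the core. Array shapes and the choice of k are assumptions; a real system would add face detection and a distance threshold for rejecting unknown faces.

```python
# Eigenfaces sketch: PCA via SVD, then recognition in the low-dimensional weight space.
import numpy as np

def fit_eigenfaces(train_faces, k=20):
    # train_faces: (n_images, n_pixels) float array of flattened, aligned faces
    mean_face = train_faces.mean(axis=0)
    centered = train_faces - mean_face
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                     # rows are the top-k principal components
    weights = centered @ eigenfaces.T       # each face as k eigenface coefficients
    return mean_face, eigenfaces, weights

def recognize(face, mean_face, eigenfaces, weights, labels):
    w = (face - mean_face) @ eigenfaces.T   # project the probe into face space
    dists = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(dists))]    # nearest stored weight vector wins
```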
b217788dd6d274ad391ee950e6f6a34033bd2fc7 | The multilayer perceptron, when trained as a classifier using backpropagation, is shown to approximate the Bayes optimal discriminant function. The result is demonstrated for both the two-class problem and multiple classes. It is shown that the outputs of the multilayer perceptron approximate the a posteriori probability functions of the classes being trained. The proof applies to any number of layers and any type of unit activation function, linear or nonlinear. |
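The claim can be checked empirically on a toy problem where the true posteriors are known in closed form. The sketch below, with illustrative hyperparameters and scikit-learn's MLPClassifier standing in for a backprop-trained multilayer perceptron, compares the network's outputs with the analytic posterior of two overlapping 1-D Gaussians.

```python
# Empirical check: MLP outputs approximate P(class | x) for two known Gaussians.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000
x0 = rng.normal(-1.0, 1.0, n)               # class 0 ~ N(-1, 1)
x1 = rng.normal(+1.0, 1.0, n)               # class 1 ~ N(+1, 1)
X = np.concatenate([x0, x1]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)

xs = np.linspace(-4, 4, 9).reshape(-1, 1)
true_post = norm.pdf(xs, 1, 1) / (norm.pdf(xs, 1, 1) + norm.pdf(xs, -1, 1))
mlp_post = clf.predict_proba(xs)[:, [1]]
print(np.abs(true_post - mlp_post).max())   # small gap, shrinking with more data
```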
647cb3825baecb6fab8b098166d5a446f7711f9b | In recent years, deep generative models have been shown to ‘imagine’ convincing high-dimensional observations such as images, audio, and even video, learning directly from raw data. In this work, we ask how to imagine goal-directed visual plans – a plausible sequence of observations that transition a dynamical system from its current configuration to a desired goal state, which can later be used as a reference trajectory for control. We focus on systems with high-dimensional observations, such as images, and propose an approach that naturally combines representation learning and planning. Our framework learns a generative model of sequential observations, where the generative process is induced by a transition in a low-dimensional planning model, and an additional noise. By maximizing the mutual information between the generated observations and the transition in the planning model, we obtain a low-dimensional representation that best explains the causal nature of the data. We structure the planning model to be compatible with efficient planning algorithms, and we propose several such models based on either discrete or continuous states. Finally, to generate a visual plan, we project the current and goal observations onto their respective states in the planning model, plan a trajectory, and then use the generative model to transform the trajectory to a sequence of observations. We demonstrate our method on imagining plausible visual plans of rope manipulation. |
a63b97291149bfed416aa9e56a21314069540a7b | OBJECTIVE
To determine the empirical evidence for deficits in working memory (WM) processes in children and adolescents with attention-deficit/hyperactivity disorder (ADHD).
METHOD
Exploratory meta-analytic procedures were used to investigate whether children with ADHD exhibit WM impairments. Twenty-six empirical research studies published from 1997 to December, 2003 (subsequent to a previous review) met our inclusion criteria. WM measures were categorized according to both modality (verbal, spatial) and type of processing required (storage versus storage/manipulation).
RESULTS
Children with ADHD exhibited deficits in multiple components of WM that were independent of comorbidity with language learning disorders and weaknesses in general intellectual ability. Overall effect sizes for spatial storage (effect size = 0.85, CI = 0.62-1.08) and spatial central executive WM (effect size = 1.06, CI = 0.72-1.39) were greater than those obtained for verbal storage (effect size = 0.47, CI = 0.36-0.59) and verbal central executive WM (effect size = 0.43, CI = 0.24-0.62).
CONCLUSION
Evidence of WM impairments in children with ADHD supports recent theoretical models implicating WM processes in ADHD. Future research is needed to more clearly delineate the nature, severity, and specificity of the impairments to ADHD. |
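For readers unfamiliar with how such overall effect sizes and confidence intervals are pooled across studies, a minimal fixed-effect (inverse-variance) sketch follows. The per-study numbers are invented for illustration; they are not the studies analyzed above.

```python
# Fixed-effect meta-analysis sketch: inverse-variance weighted pooling.
import math

studies = [(0.9, 0.15), (1.1, 0.20), (0.7, 0.25)]   # (effect size, standard error)
weights = [1 / se**2 for _, se in studies]           # precision = 1 / variance
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(f"pooled d = {pooled:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}")
```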
49e77b981a0813460e2da2760ff72c522ae49871 | Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification. |
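The attack family rests on ranking input features by how strongly they move the network toward a chosen target class. Below is a much-simplified, hypothetical PyTorch sketch of that idea, not the paper's exact saliency-map algorithm; `model`, `eps`, and `n_features` are assumptions, and `model` is any differentiable classifier mapping a flat input in [0, 1] to logits.

```python
# Simplified saliency-driven targeted perturbation: nudge only the features
# whose gradient most increases the target-class logit.
import torch

def saliency_attack(model, x, target, eps=0.1, n_features=20):
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    # Gradient of the target-class logit w.r.t. every input feature
    grad_target = torch.autograd.grad(logits[0, target], x_adv)[0]
    # Rank features by gradient magnitude; perturb the top-ranked ones
    top = torch.topk(grad_target.abs().flatten(), n_features).indices
    with torch.no_grad():
        flat = x_adv.flatten()
        flat[top] += eps * grad_target.flatten()[top].sign()
        x_adv = flat.view_as(x).clamp(0, 1)  # keep the sample in the valid range
    return x_adv.detach()
```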
3f52f57dcfdd1bb0514ff744f4fdaa986a325591 | There are several flaws in Apple's MacBook firmware security that allow untrusted modifications to be written to the SPI flash boot ROM of these laptops. This capability represents a new class of persistent firmware rootkits, or 'bootkits', for the popular Apple MacBook product line. Stealthy bootkits can conceal themselves from detection and prevent software attempts to remove them. Malicious modifications to the boot ROM are able to survive re-installation of the operating system and even hard-drive replacement. Additionally, the malware can install a copy of itself onto other Thunderbolt devices' Option ROMs as a means to spread virally across air-gap security perimeters. Apple has fixed some of these flaws as part of CVE-2014-4498, but there is no easy solution to this class of vulnerability, since the MacBook lacks trusted hardware to perform cryptographic validation of the firmware at boot time. |
3b3acbf7cc2ec806e4177eac286a2ee22f6f7630 | This paper presents an over-110-GHz-bandwidth 2:1 analog multiplexer (AMUX) for ultra-broadband digital-to-analog (D/A) conversion subsystems. The AMUX was designed and fabricated using newly developed 0.25-$\mu$m-emitter-width InP double heterojunction bipolar transistors (DHBTs), which have a peak $f_{\mathrm{T}}$ and $f_{\max}$ of 460 and 480 GHz, respectively. The AMUX IC consists of lumped building blocks, including data-input linear buffers, a clock-input limiting buffer, an AMUX core, and an output linear buffer. The measured 3-dB bandwidths for the data and clock paths are both over 110 GHz. In addition, time-domain large-signal sampling operation at up to 180 GS/s is measured. A 224-Gb/s (112-GBaud) four-level pulse-amplitude modulation (PAM4) signal was successfully generated by using this AMUX. To the best of our knowledge, this AMUX IC has the broadest bandwidth and the fastest sampling rate of any previously reported AMUX. |
4dd7721248c5489e25f46f7ab78c7d0229a596d4 | This paper introduces a fully integrated RF energy-harvesting system. The system can simultaneously deliver the current demanded by external dc loads and store the extra energy in external capacitors during periods of excess output power. The design is fabricated in 0.18-$\mu$m CMOS technology, and the active chip area is 1.08 mm$^2$. The proposed self-startup system is reconfigurable, with an integrated LC matching network, an RF rectifier, and a power management/controller unit, which consumes 66-157 nW. The required clock generation and the voltage reference circuit are integrated on the same chip. Duty cycle control is used to operate at low input power that cannot provide the demanded output power. Moreover, the number of stages of the RF rectifier is reconfigurable to increase the efficiency of the available output power. For high available power, a secondary path is activated to charge an external energy storage element. The measured RF input power sensitivity is -14.8 dBm at a 1-V dc output. |
7314be5cd836c8f06bd1ecab565b00b65259eac6 | Surveying a suite of algorithms that offer a solution to managing large document archives. |
f0eace9bfe72c2449f76461ad97c4042d2a7141b | In this letter, a novel antenna-in-package (AiP) technology at W-band has been proposed. This technology is presented for solving the special case that the metallic package should be used to accommodate high mechanical strength. By taking advantages of the multilayer low temperature co-fired ceramic (LTCC) technology, the radiation efficiency of the antenna can be maintained. Meanwhile, high mechanical strength and shielding performance are achieved. A prototype of AiP has been designed. The prototype constitutes integrated LTCC antenna, low-loss feeder, and metallic package with a tapered horn aperture. This LTCC feeder is realized by laminated waveguide (LWG). An LWG cavity that is buried in LTCC is employed to broaden the antenna impedance bandwidth. Electromagnetic (EM) simulations and measurements of antenna performances agree well over the whole frequency range of interest. The proposed prototype achieves a -10-dB impedance bandwidth of 10 GHz from 88 to 98 GHz and a peak gain of 12.3 dBi at 89 GHz. |
2077d0f30507d51a0d3bbec4957d55e817d66a59 | We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov random field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field of Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with and even outperform specialized techniques. |
214658334c581f0d18b9a871928e91b6e4f83be7 | Cell balancing circuits are important to extend the life cycle of batteries and to extract maximum power from them. Many power electronics topologies have been tried for cell balancing in battery packages. Active cell balancing topologies transfer energy from the cells showing higher performance to the cells showing lower performance, in order to balance the voltages across the cells of the battery, using energy storage elements such as inductor-capacitor combinations, transformer-capacitor combinations, switched capacitors, or switched inductors. In this study an active balancing topology that uses no energy storage element is proposed. The idea is similar to the switched-capacitor topology, in which a capacitor or capacitor bank is switched across the cells of a battery to balance the voltages. Since a basic battery cell model includes a capacitance, owing to the capacitive effect of the cell, this capacitive effect itself can be utilized for cell balancing. Hence the equalizer capacitors of the switched-capacitor topology can be eliminated and the cells of the battery can be switched with each other. This allows faster energy transfer and hence results in quicker equalization. The proposed topology removes the need for extra energy storage elements such as capacitors, which frequently fail in power electronic circuits; reduces the losses, cost, and volume introduced by extra energy storage elements; and simplifies the control algorithm. The proposed balancing circuit can be implemented according to application requirements. The proposed topology is simulated in the MATLAB/Simulink environment and shows better results in terms of balancing speed in comparison to switched-capacitor topologies. |
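A rough numerical sketch of the claimed mechanism: repeatedly switching the strongest cell directly against the weakest one lets charge flow through the switch loop and equalizes the voltages without any intermediate storage element. All component values and the simple capacitor-plus-resistance cell model below are illustrative assumptions, not taken from the paper.

```python
# Toy simulation: pairwise cell-to-cell switching equalizes voltages over time.
import numpy as np

C = 10.0         # effective cell capacitance [F] (illustrative)
R = 0.05         # loop resistance of two cells plus switches [ohm] (illustrative)
dt = 1e-3        # simulation time step [s]

v = np.array([4.10, 3.90, 4.00, 3.85])      # initial cell voltages [V]
for step in range(20000):
    i, j = np.argmax(v), np.argmin(v)       # pair the strongest and weakest cell
    current = (v[i] - v[j]) / R             # charge flows through the switch loop
    v[i] -= current * dt / C                # donor cell discharges...
    v[j] += current * dt / C                # ...receiver cell charges

print(v.round(3))                            # voltages converge toward a common value
```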
0c04909ed933469246defcf9aca2b71ae8e3f623 | The major change in the second edition of this book is the addition of a new chapter on probabilistic retrieval. This chapter has been included because I think this is one of the most interesting and active areas of research in information retrieval. There are still many problems to be solved, so I hope that this particular chapter will be of some help to those who want to advance the state of knowledge in this area. All the other chapters have been updated by including some of the more recent work on the topics covered. In preparing this new edition I have benefited from discussions with Bruce Croft. The material of this book is aimed at advanced undergraduate information (or computer) science students, postgraduate library science students, and research workers in the field of IR. Some of the chapters, particularly Chapter 6*, make simple use of a little advanced mathematics. However, the necessary mathematical tools can be easily mastered from numerous mathematical texts that now exist and, in any case, references have been given where the mathematics occur. I had to face the problem of balancing clarity of exposition with density of references. I was tempted to give large numbers of references but was afraid they would have destroyed the continuity of the text. I have tried to steer a middle course and not compete with the Annual Review of Information Science and Technology. Normally one is encouraged to cite only works that have been published in some readily accessible form, such as a book or periodical. Unfortunately, much of the interesting work in IR is contained in technical reports and Ph.D. theses. For example, most of the work done on the SMART system at Cornell is available only in reports. Luckily many of these are now available through the National Technical Information Service (U.S.) and University Microfilms (U.K.). I have not avoided using these sources although if the same material is accessible more readily in some other form I have given it preference. I should like to acknowledge my considerable debt to many people and institutions that have helped me. Let me say first that they are responsible for many of the ideas in this book but that only I wish to be held responsible. My greatest debt is to Karen Sparck Jones who taught me to research information retrieval as an experimental science. Nick Jardine and Robin … |
3cfbb77e5a0e24772cfdb2eb3d4f35dead54b118 | Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts. |
9ec20b90593695e0f5a343dade71eace4a5145de | Student, Dept. of Computer Engineering, VESIT, Maharashtra, India. Abstract: Deep Learning has emerged as a new area of research in Machine Learning. It aims to act like a human brain, having the ability to learn from and process complex data, and also attempts to solve intricate tasks. Owing to this capability, it has been used in various fields such as text, sound, and images. Natural language processing has begun to be influenced by deep learning techniques. This research paper highlights Deep Learning's recent developments and applications in Natural Language Processing. |
cc13fde0a91f4d618e6af66b49690702906316ae | Recent years have witnessed the development of cloud computing and the big data era, which brings challenges to traditional decision tree algorithms. First, as dataset sizes become extremely large, the process of building a decision tree can be quite time-consuming. Second, because the data can no longer fit in memory, some computation must be moved to external storage, which increases the I/O cost. To this end, we propose to implement a typical decision tree algorithm, C4.5, using the MapReduce programming model. Specifically, we transform the traditional algorithm into a series of Map and Reduce procedures. Besides, we design data structures to minimize the communication cost. We also conduct extensive experiments on a massive dataset. The results indicate that our algorithm exhibits both time efficiency and scalability. |
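The flavor of that transformation can be sketched simply: the map phase emits per-(attribute, value, class) counts from shards of the training data, and the reduce phase aggregates them into the contingency tables from which C4.5's gain ratio is computed for each candidate split. This is an illustrative sketch, not the paper's implementation.

```python
# Split-statistics collection for C4.5, decomposed MapReduce-style.
from collections import Counter

def map_phase(records, attributes):
    for row in records:                      # one shard of the training data
        for attr in attributes:
            yield (attr, row[attr], row["label"]), 1

def reduce_phase(pairs):
    counts = Counter()
    for key, v in pairs:                     # group-and-sum, as the shuffle would
        counts[key] += v
    return counts                            # (attribute, value, class) -> count

records = [
    {"outlook": "sunny", "windy": True,  "label": "no"},
    {"outlook": "sunny", "windy": False, "label": "yes"},
    {"outlook": "rain",  "windy": False, "label": "yes"},
]
stats = reduce_phase(map_phase(records, ["outlook", "windy"]))
print(stats[("outlook", "sunny", "no")])     # 1; gain ratio follows from these tables
```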
d73a71fa24b582accb934a9c2308567376ff396d | 3D geo-database research is a promising field to support challenging applications such as 3D urban planning, environmental monitoring, infrastructure management, and early warning or disaster management and response. In these fields, interdisciplinary research in GIScience and related fields is needed to support the modelling, analysis, management, and integration of large geo-referenced data sets, which describe human activities and geophysical phenomena. Geo-databases may serve as platforms to integrate 2D maps, 3D geo-scientific models, and other geo-referenced data. However, current geo-databases do not provide sufficient 3D data modelling and data handling techniques. New 3D geo-databases are needed to handle surface and volume models. This article first presents a 25-year retrospective of geo-database research. Data modelling, standards, and indexing of geo-data are discussed in detail. New directions for the development of 3D geo-databases to open new fields for interdisciplinary research are addressed. Two scenarios in the fields of early warning and emergency response demonstrate the combined management of human and geophysical phenomena. The article concludes with a critical outlook on open research problems. |
dce7a0550b4d63f6fe2e6908073ce0ce63626b0c | As we march down the road of automation in robotics and artificial intelligence, we will need to automate an increasing amount of ethical decision-making in order for our devices to operate independently from us. But automating ethical decision-making raises novel questions for engineers and designers, who will have to make decisions about how to accomplish that task. For example, some ethical decisionmaking involves hard moral cases, which in turn requires user input if we are to respect established norms surrounding autonomy and informed consent. The author considers this and other ethical considerations that accompany the automation of ethical decision-making. He proposes some general ethical requirements that should be taken into account in the design room, and sketches a design tool that can be integrated into the design process to help engineers, designers, ethicists, and policymakers decide how best to automate certain forms of ethical decision-making. |
ab19cbea5c61536b616cfa7654cf01bf0621b83f | |
102153467f27d43dd1db8a973846d3ac10ffdc3c | Healthcare is one of the most rapidly expanding application areas of the Internet of Things (IoT) technology. IoT devices can be used to enable remote health monitoring of patients with chronic diseases such as cardiovascular diseases (CVD). In this paper we develop an algorithm for ECG analysis and classification for heartbeat diagnosis, and implement it on an IoT-based embedded platform. This algorithm is our proposal for a wearable ECG diagnosis device, suitable for 24-hour continuous monitoring of the patient. We use Discrete Wavelet Transform (DWT) for the ECG analysis, and a Support Vector Machine (SVM) classifier. The best classification accuracy achieved is 98.9%, for a feature vector of size 18, and 2493 support vectors. Different implementations of the algorithm on the Galileo board, help demonstrate that the computational cost is such, that the ECG analysis and classification can be performed in real-time. |
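A compact sketch of such a DWT-plus-SVM pipeline, using PyWavelets and scikit-learn: the beat windows, labels, wavelet choice, and sub-band summary statistics below are assumptions for illustration, not the paper's exact 18-dimensional feature vector.

```python
# DWT features per heartbeat window, classified with an RBF-kernel SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def beat_features(beat, wavelet="db4", level=4):
    coeffs = pywt.wavedec(beat, wavelet, level=level)  # multi-level decomposition
    feats = []
    for c in coeffs:
        feats += [np.sum(c ** 2), np.std(c)]           # energy and spread per sub-band
    return np.array(feats)

def train_classifier(beats, labels):
    # beats: (n_beats, window_len) array; labels: (n_beats,) beat classes
    X = np.vstack([beat_features(b) for b in beats])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    return clf.fit(X, labels)
```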
44159c85dec6df7a257cbe697bfc854ecb1ebb0b | The newly inaugurated Research Resource for Complex Physiologic Signals, which was created under the auspices of the National Center for Research Resources of the National Institutes of Health, is intended to stimulate current research and new investigations in the study of cardiovascular and other complex biomedical signals. The resource has 3 interdependent components. PhysioBank is a large and growing archive of well-characterized digital recordings of physiological signals and related data for use by the biomedical research community. It currently includes databases of multiparameter cardiopulmonary, neural, and other biomedical signals from healthy subjects and from patients with a variety of conditions with major public health implications, including life-threatening arrhythmias, congestive heart failure, sleep apnea, neurological disorders, and aging. PhysioToolkit is a library of open-source software for physiological signal processing and analysis, the detection of physiologically significant events using both classic techniques and novel methods based on statistical physics and nonlinear dynamics, the interactive display and characterization of signals, the creation of new databases, the simulation of physiological and other signals, the quantitative evaluation and comparison of analysis methods, and the analysis of nonstationary processes. PhysioNet is an on-line forum for the dissemination and exchange of recorded biomedical signals and open-source software for analyzing them. It provides facilities for the cooperative analysis of data and the evaluation of proposed new algorithms. In addition to providing free electronic access to PhysioBank data and PhysioToolkit software via the World Wide Web (http://www.physionet.org), PhysioNet offers services and training via on-line tutorials to assist users with varying levels of expertise. |
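Today, one convenient way to access PhysioBank data programmatically is the third-party `wfdb` Python package (a later tool, not part of the original PhysioToolkit); the record and database names below are illustrative.

```python
# Pull a PhysioBank record and its beat annotations over the network.
import wfdb

record = wfdb.rdrecord("100", pn_dir="mitdb", channels=[0])  # MIT-BIH record 100
annotation = wfdb.rdann("100", "atr", pn_dir="mitdb")        # reference annotations
print(record.fs, record.p_signal.shape, len(annotation.sample))
```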
a92eac4415719698d7d2097ef9564e7b36699010 | Purpose
To identify the applicability of social auditing as an approach of engaging stakeholders in assessing and reporting on corporate sustainability and its performance.
Design/methodology/approach
Drawing upon the framework of AA1000 and the social auditing studies, this paper links stakeholder engagement, social auditing and corporate sustainability with a view to applying dialogue-based social auditing to address corporate sustainability.
Findings
This paper identifies a "match" between corporate sustainability and social auditing, as both aim at improving the social, environmental and economic performance of an organisation, considering the well-being of a wider range of stakeholders and requiring the engagement of stakeholders in the process. This paper suggests that social auditing through engaging stakeholders via dialogue could be applied to build trust, identify commitment and promote cooperation amongst stakeholders and corporations.
Research limitations/implications
This research requires further empirical research into the practicality of social auditing in addressing corporate sustainability and the determination of the limitations of dialogue-based social auditing.
Practical implications
Social auditing has been identified as a useful mechanism for balancing differing interests among stakeholders and corporations in a democratic business society. The application of social auditing in developing and achieving corporate sustainability has evident practical implications.
Originality/value
This paper examines the applicability of dialogue-based social auditing in helping business move towards sustainability. Social auditing as a process of assessing and reporting on corporate social and environmental performance through engaging stakeholders via dialogue could be applied to build trust, identify commitment and promote cooperation amongst stakeholders and corporations. |
915c4bb289b3642489e904c65a47fa56efb60658 | We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results. |
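The core of the approach is easy to state in code: measure the distance between activations of a fixed, pretrained loss network rather than between raw pixels. A minimal PyTorch sketch follows, with the VGG16 layer index and the single-layer loss as illustrative simplifications (the full method combines several layers and, for style transfer, a style term).

```python
# Feature reconstruction ("perceptual") loss against a frozen VGG16.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self, layer=8):                    # index 8 ~ relu2_2 in VGG16.features
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[: layer + 1].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)                 # the loss network stays fixed

    def forward(self, generated, target):
        # Compare deep feature maps instead of raw pixels
        return nn.functional.mse_loss(self.features(generated), self.features(target))

loss_fn = PerceptualLoss()
x = torch.rand(1, 3, 256, 256)                      # output of a transformation network
y = torch.rand(1, 3, 256, 256)                      # content / ground-truth image
print(loss_fn(x, y).item())
```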
9201bf6f8222c2335913002e13fbac640fc0f4ec | |
929a376c6fea1376baf40fc2979cfbdd867f03ab | Lossy image compression methods always introduce various unpleasant artifacts into the compressed results, especially at low bit-rates. In recent years, many effective soft decoding methods for JPEG compressed images have been proposed. However, to the best of our knowledge, very little work has been done on soft decoding of JPEG 2000 compressed images. Inspired by the outstanding performance of Convolutional Neural Networks (CNNs) in various computer vision tasks, we present a soft decoding method for JPEG 2000 that uses multiple bit-rate-driven deep CNNs. More specifically, in the training stage, we train a series of deep CNNs using many high-quality training images and the corresponding JPEG 2000 compressed images at different coding bit-rates. In the testing stage, for an input compressed image, the CNN trained with the nearest coding bit-rate is selected to perform soft decoding. Extensive experiments demonstrate the effectiveness of the presented soft decoding framework, which greatly improves the visual quality and objective scores of JPEG 2000 compressed images. |
cfa092829c4c7a42ec77ab6844661e1dae082172 | Bitcoin has introduced a new concept that could feasibly revolutionise the entire Internet as it exists and positively impact many types of industries including, but not limited to, banking, the public sector and supply chains. This innovation is grounded on pseudo-anonymity and thrives on its innovative decentralised architecture based on blockchain technology. Blockchain is pushing forward a race of transaction-based applications that establish trust without the need for a centralised authority, promoting accountability and transparency within the business process. However, a blockchain ledger (e.g., Bitcoin) tends to become very complex, and specialised tools, collectively called "Blockchain Analytics", are required to allow individuals, law enforcement agencies and service providers to search, explore and visualise it. Over the last few years, several analytical tools have been developed with capabilities that allow, e.g., mapping relationships, examining flows of transactions and filtering crime instances as a way to enhance forensic investigations. This paper discusses the current state of blockchain analytical tools and presents a thematic taxonomy model based on their applications. It also examines open challenges for future development and research. |
2e5fadbaab27af0c2b5cc6a3481c11b2b83c4f94 | We introduce the novel problem of identifying the photographer behind a photograph. To explore the feasibility of current computer vision techniques to address this problem, we created a new dataset of over 180,000 images taken by 41 well-known photographers. Using this dataset, we examined the effectiveness of a variety of features (low and high-level, including CNN features) at identifying the photographer. We also trained a new deep convolutional neural network for this task. Our results show that high-level features greatly outperform low-level features. We provide qualitative results using these learned models that give insight into our method's ability to distinguish between photographers, and allow us to draw interesting conclusions about what specific photographers shoot. We also demonstrate two applications of our method. |
25b6818743a6c0b9502a1c026c653038ff505c09 | |
6ed67a876b3afd2f2fb7b5b8c0800a0398c76603 | |
24281c886cd9339fe2fc5881faf5ed72b731a03e | MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time. |
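A minimal PySpark sketch of the working-set reuse being described: an RDD is cached once and then reused by several parallel operations, instead of being re-read from disk on every pass. The file path and queries are illustrative.

```python
# Cache a filtered working set once; run multiple parallel operations over it.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-demo")
lines = sc.textFile("hdfs://example/logs.txt")          # illustrative input path
errors = lines.filter(lambda l: "ERROR" in l).cache()   # keep the working set in memory

# Subsequent operations hit the cached RDD rather than re-scanning the file
print(errors.count())
print(errors.filter(lambda l: "timeout" in l).count())
print(errors.map(lambda l: (l.split()[0], 1))
            .reduceByKey(lambda a, b: a + b)
            .take(5))
```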
03ff3f8f4d5a700fbe8f3a3e63a39523c29bb60f | The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline. |
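The distinctive operation is easy to isolate. A short PyTorch sketch of (dynamic) k-max pooling: keep the k largest activations along the sequence axis in their original order, with k shrinking down the network as a function of sentence length. The tensor layout and hyperparameters are illustrative.

```python
# k-max pooling over the sequence dimension, order-preserving.
import math
import torch

def kmax_pooling(x, k):
    # x: (batch, channels, seq_len) -> (batch, channels, k)
    idx = x.topk(k, dim=2).indices.sort(dim=2).values   # top-k positions, in order
    return x.gather(2, idx)

def dynamic_k(layer, total_layers, seq_len, k_top):
    # Linearly shrinking k down the network, as in the DCNN
    return max(k_top, math.ceil(((total_layers - layer) / total_layers) * seq_len))

x = torch.randn(2, 4, 10)                               # batch=2, channels=4, length=10
k = dynamic_k(layer=1, total_layers=3, seq_len=10, k_top=3)
print(kmax_pooling(x, k).shape)                         # torch.Size([2, 4, 7])
```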
ff5c193fd7142b3f426baf997b43937eca1bbbad | Multilevel inverter technology has emerged recently as a very important alternative in the area of high-power medium-voltage energy control. This paper presents the most important topologies like diode-clamped inverter (neutral-point clamped), capacitor-clamped (flying capacitor), and cascaded multicell with separate dc sources. Emerging topologies like asymmetric hybrid cells and soft-switched multilevel inverters are also discussed. This paper also presents the most relevant control and modulation methods developed for this family of converters: multilevel sinusoidal pulsewidth modulation, multilevel selective harmonic elimination, and space-vector modulation. Special attention is dedicated to the latest and more relevant applications of these converters such as laminators, conveyor belts, and unified power-flow controllers. The need of an active front end at the input side for those inverters supplying regenerative loads is also discussed, and the circuit topology options are also presented. Finally, the peripherally developing areas such as high-voltage high-power devices and optical sensors and other opportunities for future development are addressed. |