Diff-Font: Diffusion Model for Robust One-Shot Font Generation ; Font generation is a difficult and time-consuming task, especially in those languages using ideograms that have complicated structures with a large number of characters, such as Chinese. To solve this problem, few-shot font generation and even one-shot font generation have attracted a lot of attention. However, most existing font generation methods may still suffer from (i) the large cross-font gap challenge; (ii) the subtle cross-font variation problem; and (iii) incorrect generation of complicated characters. In this paper, we propose a novel one-shot font generation method based on a diffusion model, named Diff-Font, which can be stably trained on large datasets. The proposed model aims to generate the entire font library by giving only one sample as the reference. Specifically, a large stroke-wise dataset is constructed, and a stroke-wise diffusion model is proposed to preserve the structure and the completion of each generated character. To the best of our knowledge, the proposed Diff-Font is the first work to develop diffusion models for the font generation task. The well-trained Diff-Font is not only robust to font gap and font variation, but also achieves promising performance on difficult character generation. Compared to previous font generation methods, our model reaches state-of-the-art performance both qualitatively and quantitatively.
HumanLiff: Layer-wise 3D Human Generation with Diffusion Model ; 3D human generation from 2D images has achieved remarkable progress through the synergistic utilization of neural rendering and generative models. Existing 3D human generative models mainly generate a clothed 3D human as an undetachable 3D model in a single pass, while rarely considering the layer-wise nature of a clothed human body, which often consists of the human body and various clothes such as underwear, outerwear, trousers, shoes, etc. In this work, we propose HumanLiff, the first layer-wise 3D human generative model with a unified diffusion process. Specifically, HumanLiff first generates minimal-clothed humans, represented by tri-plane features, in a canonical space, and then progressively generates clothes in a layer-wise manner. In this way, 3D human generation is formulated as a sequence of diffusion-based 3D conditional generation steps. To reconstruct more fine-grained 3D humans with the tri-plane representation, we propose a tri-plane shift operation that splits each tri-plane into three sub-planes and shifts these sub-planes to enable feature grid subdivision. To further enhance the controllability of 3D generation with 3D layered conditions, HumanLiff hierarchically fuses tri-plane features and 3D layered conditions to facilitate the 3D diffusion model learning. Extensive experiments on two layer-wise 3D human datasets, SynBody (synthetic) and TightCap (real-world), validate that HumanLiff significantly outperforms state-of-the-art methods in layer-wise 3D human generation. Our code will be available at https://skhu101.github.io/HumanLiff.
Democracy versus Dictatorship in Self-Organized Models of Financial Markets ; Models to mimic the transmission of information in financial markets are introduced. As an attempt to generate the demand process, we distinguish between dictatorship associations, where groups of agents rely on one of them to make decisions, and democratic associations, where each agent takes part in the group decision. In the dictatorship model, agents segregate into two distinct populations, while the democratic model is driven towards a critical state where groups of agents of all sizes exist. Hence, both models display a level of organization, but only the democratic model is self-organized. We show that the dictatorship model generates less volatile markets than the democratic model.
On asymptotic models in Banach spaces ; A well-known application of Ramsey's Theorem to Banach Space Theory is the notion of a spreading model $(e_i')$ of a normalized basic sequence $(x_i)$ in a Banach space $X$. We show how to generalize the construction to define a new creature $(e_i)$, which we call an asymptotic model of $X$. Every spreading model of $X$ is an asymptotic model of $X$, and in most settings, such as if $X$ is reflexive, every normalized block basis of an asymptotic model is itself an asymptotic model. We also show how to use the Hindman-Milliken Theorem, a strengthened form of Ramsey's Theorem, to generate asymptotic models with a stronger form of convergence.
On Variational Micro-Macro Models and their Application to Polycrystals ; Some variational micro-macro models are briefly reviewed; it is shown how, starting from the Taylor model and passing through the relaxed Taylor model, a consistent intermediate between Taylor's upper bound and the lower bound (Sachs, or rather static) model was obtained. This intermediate or inhomogeneous variational model (indeed, it generally predicts both strain and stress to be inhomogeneous) could offer a general alternative to self-consistent models. However, the implemented version was a rather empirical model (ARMINJON 1984) with a less well-defined status. We present current progress in the implementation of the correct version.
Multi-species reaction-diffusion models admitting shock solutions ; A method for classifying n-species reaction-diffusion models admitting shock solutions is presented. The most general one-dimensional two-species reaction-diffusion model with nearest-neighbor interactions admitting uniform product measures as the stationary states is studied. Satisfying more constraints, these models may experience single-shock solutions. These models are generalized to multi-species models. The two-species models are studied in detail. Dynamical phase transitions of such models are also investigated.
Modelling Word Burstiness in Natural Language: A Generalised Polya Process for Document Language Models in Information Retrieval ; We introduce a generalised multivariate Polya process for document language modelling. The framework outlined here generalises a number of statistical language models used in information retrieval for modelling document generation. In particular, we show that the choice of replacement matrix M ultimately defines the type of random process and therefore defines a particular type of document language model. We show that a particular variant of the general model is useful for modelling term-specific burstiness. Furthermore, via experimentation we show that this variant significantly improves retrieval effectiveness over a strong baseline on a number of small test collections.
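The urn mechanism underlying such a process lends itself to a compact illustration. The sketch below is my own illustration, not the paper's code or notation: a generic multivariate Polya urn in which a replacement matrix M governs how each draw reinforces the urn; the toy vocabulary, the identity-matrix example, and all parameter values are purely illustrative.

```python
import numpy as np

def sample_document(alpha, M, doc_len, rng=None):
    """Draw a document from a generalized multivariate Polya urn.

    alpha : initial urn composition over the vocabulary (shape [V])
    M     : replacement matrix; after drawing word w, row M[w] is added
            to the urn (shape [V, V]). M = I recovers the classic Polya
            (bursty) behaviour, M = 0 recovers plain multinomial sampling.
    """
    rng = rng or np.random.default_rng()
    urn = np.asarray(alpha, dtype=float).copy()
    words = []
    for _ in range(doc_len):
        p = urn / urn.sum()              # current draw probabilities
        w = rng.choice(len(urn), p=p)    # draw one word
        urn += M[w]                      # reinforce according to M
        words.append(int(w))
    return words

# Toy example: a 5-word vocabulary with identity replacement, which makes
# repeated words progressively more likely (word burstiness).
V = 5
doc = sample_document(alpha=np.ones(V), M=np.eye(V), doc_len=20)
print(doc)
```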
$U(1)_{T3R}$ Extension of the Standard Model: A Sub-GeV Dark Matter Model ; We present a model based on a $U(1)_{T3R}$ extension of the Standard Model. The model addresses the mass hierarchy between the third-generation and the first two generations of fermions. $U(1)_{T3R}$ is spontaneously broken at $\sim 1-10$ GeV. The model contains a sub-GeV dark matter candidate and two sub-GeV light scalar and vector mediators. The model explains the thermal dark matter abundance, the measurements of the muon $g-2$, and the $R_{K^{(\ast)}}$ anomalies. The model can be probed at the LHC, FASER, dark matter experiments, and various beam-dump based neutrino facilities, e.g., COHERENT, CCM, MicroBooNE, SBND, ICARUS, DUNE, etc.
Let the Models Respond: Interpreting Language Model Detoxification Through the Lens of Prompt Dependence ; Due to language models' propensity to generate toxic or hateful responses, several techniques were developed to align model generations with users' preferences. Despite the effectiveness of such methods in improving the safety of model interactions, their impact on models' internal processes is still poorly understood. In this work, we apply popular detoxification approaches to several language models and quantify their impact on the resulting models' prompt dependence using feature attribution methods. We evaluate the effectiveness of counter-narrative fine-tuning and compare it with reinforcement learning-driven detoxification, observing differences in prompt reliance between the two methods despite their similar detoxification performance.
BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models ; The rise in popularity of text-to-image generative artificial intelligence (AI) has attracted widespread public interest. We demonstrate that this technology can be attacked to generate content that subtly manipulates its users. We propose a Backdoor Attack on text-to-image Generative Models (BAGM), which upon triggering, infuses the generated images with manipulative details that are naturally blended into the content. Our attack is the first to target three popular text-to-image generative models across three stages of the generative process by modifying the behaviour of the embedded tokenizer, the language model or the image generative model. Based on the penetration level, BAGM takes the form of a suite of attacks that are referred to as surface, shallow and deep attacks in this article. Given the existing gap within this domain, we also contribute a comprehensive set of quantitative metrics designed specifically for assessing the effectiveness of backdoor attacks on text-to-image models. The efficacy of BAGM is established by attacking state-of-the-art generative models, using a marketing scenario as the target domain. To that end, we contribute a dataset of branded product images. Our embedded backdoors increase the bias towards the target outputs by more than five times the usual, without compromising the model robustness or the generated content utility. By exposing generative AI's vulnerabilities, we encourage researchers to tackle these challenges and practitioners to exercise caution when using pre-trained models. Relevant code, input prompts and supplementary material can be found at https://github.com/JJVice/BAGM, and the dataset is available at https://ieee-dataport.org/documents/marketable-foods-mf-dataset. Keywords: Generative Artificial Intelligence, Generative Models, Text-to-Image Generation, Backdoor Attacks, Trojan, Stable Diffusion.
Fractal growth of tumors and other cellular populations: Linking the mechanistic to the phenomenological modeling and vice versa ; In this paper we study and extend the mechanistic mean field theory of growth of cellular populations proposed by Mombach et al. in Mombach J. C. M. et al., Europhysics Letters, 59 (2002) 923 (the MLBI model), and we demonstrate that the original model and our generalizations lead to inferences of biological interest. In the first part of this paper, we show that the model under study is widely general, since it admits, as particular cases, the main phenomenological models of cellular growth. In the second part of this work, we generalize the MLBI model to a wider family of models by allowing the cells to have a generic, unspecified, biologically plausible interaction. Then, we derive a relationship between this generic microscopic interaction function and the growth rate of the corresponding macroscopic model. Finally, we propose to use this relationship in order to help the investigation of the biological plausibility of phenomenological models of cancer growth.
Polite Dialogue Generation Without Parallel Data ; Stylistic dialogue response generation, with valuable applications in personality-based conversational agents, is a challenging task because the response needs to be fluent, contextually relevant, as well as paralinguistically accurate. Moreover, parallel datasets for regular-to-stylistic pairs are usually unavailable. We present three weakly-supervised models that can generate diverse, polite (or rude) dialogue responses without parallel data. Our late fusion model (Fusion) merges the decoder of an encoder-attention-decoder dialogue model with a language model trained on stand-alone polite utterances. Our label-fine-tuning (LFT) model prepends to each source sequence a politeness-score scaled label (predicted by our state-of-the-art politeness classifier) during training, and at test time is able to generate polite, neutral, and rude responses by simply scaling the label embedding by the corresponding score. Our reinforcement learning model (Polite-RL) encourages politeness generation by assigning rewards proportional to the politeness classifier score of the sampled response. We also present two retrieval-based polite dialogue model baselines. Human evaluation validates that while the Fusion and the retrieval-based models achieve politeness with poorer context-relevance, the LFT and Polite-RL models can produce significantly more polite responses without sacrificing dialogue quality.
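As a rough illustration of the label-fine-tuning idea described in this abstract, the sketch below (an assumption-laden reconstruction, not the authors' code) prepends a style-label embedding, scaled by a continuous politeness score, to the source token embeddings fed into a seq2seq encoder; all module names, dimensions, and the three-label setup are illustrative.

```python
import torch
import torch.nn as nn

class LFTEncoderInput(nn.Module):
    """Illustrative sketch of the label-fine-tuning (LFT) idea: prepend a
    style-label embedding, scaled by a continuous score, to the source
    token embeddings before they enter a seq2seq encoder."""

    def __init__(self, vocab_size, d_model, n_labels=3):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.label_emb = nn.Embedding(n_labels, d_model)  # e.g. polite / neutral / rude

    def forward(self, src_ids, label_id, score):
        # src_ids: [B, T], label_id: [B], score: [B] in [0, 1]
        tok = self.tok_emb(src_ids)                            # [B, T, D]
        lab = self.label_emb(label_id) * score.unsqueeze(-1)   # scale by classifier score
        return torch.cat([lab.unsqueeze(1), tok], dim=1)       # prepend label "token"

# At test time, sweeping `score` for the chosen label nudges the decoder
# toward more or less of that style.
enc_in = LFTEncoderInput(vocab_size=1000, d_model=64)
x = torch.randint(0, 1000, (2, 10))
out = enc_in(x, label_id=torch.tensor([0, 0]), score=torch.tensor([0.9, 0.2]))
print(out.shape)  # torch.Size([2, 11, 64])
```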
Latent Topic Conversational Models ; Latent variable models have been a preferred choice in conversational modeling compared to sequence-to-sequence (seq2seq) models, which tend to generate generic and repetitive responses. Despite this, training latent variable models remains difficult. In this paper, we propose the Latent Topic Conversational Model (LTCM), which augments seq2seq with a neural latent topic component to better guide response generation and make training easier. The neural topic component encodes information from the source sentence to build a global topic distribution over words, which is then consulted by the seq2seq model at each generation step. We study in detail how the latent representation is learnt in both the vanilla model and LTCM. Our extensive experiments contribute to better understanding and training of conditional latent models for language. Our results show that by sampling from the learnt latent representations, LTCM can generate diverse and interesting responses. In a subjective human evaluation, the judges also confirm that LTCM is the overall preferred option.
Maximum entropy models capture melodic styles ; We introduce a Maximum Entropy model able to capture the statistics of melodies in music. The model can be used to generate new melodies that emulate the style of the musical corpus which was used to train it. Instead of using the n-body interactions of (n-1)-order Markov models, traditionally used in automatic music generation, we use a k-nearest-neighbour model with pairwise interactions only. In that way, we keep the number of parameters low and avoid the overfitting problems typical of Markov models. We show that long-range musical phrases don't need to be explicitly enforced using high-order Markov interactions, but can instead emerge from multiple, competing, pairwise interactions. We validate our Maximum Entropy model by contrasting how much the generated sequences capture the style of the original corpus without plagiarizing it. To this end we use a data-compression approach to discriminate the levels of borrowing and innovation featured by the artificial sequences. The results show that our modelling scheme outperforms both fixed-order and variable-order Markov models. This shows that, despite being based only on pairwise interactions, this Maximum Entropy scheme opens the possibility to generate musically sensible alterations of the original phrases, providing a way to generate innovation.
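To make the pairwise maximum-entropy idea concrete, here is a toy sketch (not the paper's implementation): a melody is sampled from a pairwise energy over notes within distance k via Metropolis updates. In the paper the couplings would be fitted so that pair statistics match a musical corpus; here they are random placeholders, and all sizes and temperatures are illustrative.

```python
import numpy as np

def energy(seq, J, k):
    """Pairwise maximum-entropy energy: sum of couplings J[d-1][a, b]
    over all note pairs (a, b) separated by a distance d <= k."""
    e = 0.0
    for d in range(1, k + 1):
        for t in range(len(seq) - d):
            e += J[d - 1][seq[t], seq[t + d]]
    return e

def metropolis_sample(length, n_notes, J, k, steps=5000, beta=1.0, rng=None):
    """Generate a melody by Metropolis sampling from exp(-beta * E)."""
    rng = rng or np.random.default_rng()
    seq = rng.integers(0, n_notes, size=length)
    e = energy(seq, J, k)
    for _ in range(steps):
        t = rng.integers(length)
        new = seq.copy()
        new[t] = rng.integers(n_notes)          # propose a single-note change
        e_new = energy(new, J, k)
        if rng.random() < np.exp(-beta * (e_new - e)):  # accept / reject
            seq, e = new, e_new
    return seq

# Toy run with random couplings; fitted couplings would encode the corpus style.
rng = np.random.default_rng(0)
n_notes, k = 12, 4
J = [rng.normal(scale=0.5, size=(n_notes, n_notes)) for _ in range(k)]
print(metropolis_sample(length=32, n_notes=n_notes, J=J, k=k, rng=rng))
```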
Socratic Learning: Augmenting Generative Models to Incorporate Latent Subsets in Training Data ; A challenge in training discriminative models like neural networks is obtaining enough labeled training data. Recent approaches use generative models to combine weak supervision sources, like user-defined heuristics or knowledge bases, to label training data. Prior work has explored learning accuracies for these sources even without ground truth labels, but they assume that a single accuracy parameter is sufficient to model the behavior of these sources over the entire training set. In particular, they fail to model latent subsets in the training data in which the supervision sources perform differently than on average. We present Socratic learning, a paradigm that uses feedback from a corresponding discriminative model to automatically identify these subsets and augments the structure of the generative model accordingly. Experimentally, we show that without any ground truth labels, the augmented generative model reduces error by up to 56.06% for a relation extraction task compared to a state-of-the-art weak supervision technique that utilizes generative models.
SHAPED: Shared-Private Encoder-Decoder for Text Style Adaptation ; Supervised training of abstractive language generation models results in learning conditional probabilities over language sequences based on the supervised training signal. When the training signal contains a variety of writing styles, such models may end up learning an 'average' style that is directly influenced by the training data makeup and cannot be controlled by the needs of an application. We describe a family of model architectures capable of capturing both generic language characteristics via shared model parameters, as well as particular style characteristics via private model parameters. Such models are able to generate language according to a specific learned style, while still taking advantage of their power to model generic language phenomena. Furthermore, we describe an extension that uses a mixture of output distributions from all learned styles to perform on-the-fly style adaptation based on the textual input alone. Experimentally, we find that the proposed models consistently outperform models that encapsulate single-style or average-style language generation capabilities.
Least Angle Regression in Tangent Space and LASSO for Generalized Linear Models ; This study proposes sparse estimation methods for generalized linear models, which run either least angle regression (LARS) or the least absolute shrinkage and selection operator (LASSO) in the tangent space of the manifold of the statistical model. This study approximates the statistical model and subsequently uses exact calculations. LARS was proposed as an efficient algorithm for parameter estimation and variable selection for the normal linear model. The LARS algorithm is described in terms of Euclidean geometry, regarding the correlation as the metric of the parameter space. Since the LARS algorithm only works in Euclidean space, we transform a manifold of the statistical model into the tangent space at the origin. In generalized linear regression, this transformation allows us to run the original LARS algorithm for generalized linear models. The proposed methods are efficient and perform well. Real-data analysis indicates that the proposed methods output results similar to those of the $l_1$-regularized maximum likelihood estimation for the aforementioned models. Numerical experiments reveal that our methods work well and that they may be better than $l_1$-regularization in generalization, parameter estimation, and model selection.
Endogenous Stochastic Arbitrage Bubbles and the Black-Scholes model ; This paper develops a model that incorporates the presence of stochastic arbitrage explicitly in the Black-Scholes equation. Here, the arbitrage is generated by a stochastic bubble, which generalizes the deterministic arbitrage model obtained in the literature. A generic stochastic dynamics is considered for the arbitrage bubble, and a generalized Black-Scholes equation is then derived. The resulting equation is similar to that of the stochastic volatility models, but without undetermined parameters such as the market price of risk. The proposed theory has asymptotic behaviors that are associated with the weak and strong arbitrage bubble limits. For the case where the arbitrage bubble's volatility is zero (deterministic bubble), the weak limit corresponds to the usual Black-Scholes model. The strong limit case also gives a Black-Scholes model, but the underlying asset's mean value replaces the interest rate. When the bubble is stochastic, the theory also has weak and strong asymptotic limits that give rise to option price dynamics that are similar to the Black-Scholes model. Explicit formulas are derived for Gaussian and log-normal stochastic bubbles. Consequently, the Black-Scholes model can be considered to be a low-energy limit of a more general stochastic model.
Image-based model parameter optimization using Model-Assisted Generative Adversarial Networks ; We propose and demonstrate the use of a model-assisted generative adversarial network (GAN) to produce fake images that accurately match true images through the variation of the parameters of the model that describes the features of the images. The generator learns the model parameter values that produce fake images that best match the true images. Two case studies show excellent agreement between the generated best-match parameters and the true parameters. The best-match model parameter values can be used to retune the default simulation to minimize any bias when applying image recognition techniques to fake and true images. In the case of a real-world experiment, the true images are experimental data with unknown true model parameter values, and the fake images are produced by a simulation that takes the model parameters as input. The model-assisted GAN uses a convolutional neural network to emulate the simulation for all parameter values that, when trained, can be used as a conditional generator for fast fake-image production.
Relatively complicated? Using models to teach general relativity at different levels ; This review presents an overview of various kinds of models (physical, abstract, mathematical, visual) that can be used to present the concepts and applications of Einstein's general theory of relativity at the level of undergraduate and even high-school teaching. After a general introduction dealing with various kinds of models and their properties, specific areas of general relativity are addressed: the elastic sheet model and other models for the fundamental geometric properties of gravity, models for black holes including the river model, cosmological models for an expanding universe, and models for gravitational waves as well as for interferometric gravitational wave detectors.
Learning Non-Convergent Non-Persistent Short-Run MCMC Toward Energy-Based Model ; This paper studies a curious phenomenon in learning an energy-based model (EBM) using MCMC. In each learning iteration, we generate synthesized examples by running a non-convergent, non-mixing, and non-persistent short-run MCMC toward the current model, always starting from the same initial distribution such as the uniform noise distribution, and always running a fixed number of MCMC steps. After generating synthesized examples, we then update the model parameters according to the maximum likelihood learning gradient, as if the synthesized examples are fair samples from the current model. We treat this non-convergent short-run MCMC as a learned generator model or a flow model. We provide arguments for treating the learned non-convergent short-run MCMC as a valid model. We show that the learned short-run MCMC is capable of generating realistic images. More interestingly, unlike traditional EBM or MCMC, the learned short-run MCMC is capable of reconstructing observed images and interpolating between images, like a generator or flow model. The code can be found in the Appendix.
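The learning loop described here is simple enough to sketch. The PyTorch fragment below is an illustrative reconstruction rather than the paper's settings: K steps of Langevin dynamics restarted from uniform noise produce the synthesized batch, and the parameters are then updated as if those samples were fair model samples. The energy network, step sizes, and K are placeholders.

```python
import torch

def short_run_langevin(E, batch_shape, K=100, step=10.0, noise=0.005):
    """K-step non-persistent short-run MCMC, always restarted from
    uniform noise (the fixed initial distribution)."""
    x = torch.rand(batch_shape)                       # uniform init
    for _ in range(K):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(E(x).sum(), x)[0]  # gradient of the energy
        x = x - 0.5 * step * grad + noise * torch.randn_like(x)
    return x.detach()

def ebm_update(E, optimizer, x_data, K=100):
    """One maximum-likelihood-style update: treat the short-run samples
    as if they were fair samples from the current model."""
    x_syn = short_run_langevin(E, x_data.shape, K=K)
    loss = E(x_data).mean() - E(x_syn).mean()         # surrogate for -grad log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy energy network on flattened 8x8 "images"; all values are placeholders.
E = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.SiLU(), torch.nn.Linear(128, 1))
opt = torch.optim.Adam(E.parameters(), lr=1e-4)
x_data = torch.rand(16, 64)
print(ebm_update(E, opt, x_data))
```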
General F-theory models with tuned $(\operatorname{SU}(3) \times \operatorname{SU}(2) \times \operatorname{U}(1))/\mathbb{Z}_6$ symmetry ; We construct a general form for an F-theory Weierstrass model over a general base giving a 6D or 4D supergravity theory with gauge group $(\operatorname{SU}(3) \times \operatorname{SU}(2) \times \operatorname{U}(1))/\mathbb{Z}_6$ and generic associated matter, which includes the matter content of the standard model. The Weierstrass model is identified by unHiggsing a model with $\operatorname{U}(1)$ gauge symmetry and charges $q \le 4$ previously found by the first author. This model includes two distinct branches that were identified in earlier work, and includes as a special case the class of models recently studied by Cvetič, Halverson, Lin, Liu, and Tian, for which we demonstrate explicitly the possibility of unification through an $\operatorname{SU}(5)$ unHiggsing. We develop a systematic methodology for checking that a parameterized class of F-theory Weierstrass models with a given gauge group G and fixed matter content is generic (contains all allowed moduli) and confirm that this holds for the models constructed here.
A Systematic Assessment of Syntactic Generalization in Neural Language Models ; While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge. Furthermore, existing work has not provided a clear picture about the model properties required to produce proper syntactic generalizations. We present a systematic evaluation of the syntactic knowledge of neural language models, testing 20 combinations of model types and data sizes on a set of 34 English-language syntactic test suites. We find substantial differences in syntactic generalization performance by model architecture, with sequential models underperforming other architectures. Factorially manipulating model architecture and training dataset size (1M-40M words), we find that variability in syntactic generalization performance is substantially greater by architecture than by dataset size for the corpora tested in our experiments. Our results also reveal a dissociation between perplexity and syntactic generalization performance.
A Multi-attribute Controllable Generative Model for Histopathology Image Synthesis ; Generative models have been applied in the medical imaging domain for various image recognition and synthesis tasks. However, a more controllable and interpretable image synthesis model is still lacking yet necessary for important applications such as assisting in medical training. In this work, we leverage the efficient self-attention and contrastive learning modules and build upon state-of-the-art generative adversarial networks (GANs) to achieve an attribute-aware image synthesis model, termed AttributeGAN, which can generate high-quality histopathology images based on multi-attribute inputs. In comparison to existing single-attribute conditional generative models, our proposed model better reflects input attributes and enables smoother interpolation among attribute values. We conduct experiments on a histopathology dataset containing stained H&E images of urothelial carcinoma and demonstrate the effectiveness of our proposed model via comprehensive quantitative and qualitative comparisons with state-of-the-art models as well as different variants of our model. Code is available at https://github.com/karenyyy/MICCAI2021AttributeGAN.
Image Super-Resolution With Deep Variational Autoencoders ; Image super-resolution (SR) techniques are used to generate a high-resolution image from a low-resolution image. Until now, deep generative models such as autoregressive models and Generative Adversarial Networks (GANs) have proven to be effective at modelling high-resolution images. VAE-based models have often been criticised for their feeble generative performance, but with new advancements such as VDVAE, there is now strong evidence that deep VAEs have the potential to outperform current state-of-the-art models for high-resolution image generation. In this paper, we introduce VDVAE-SR, a new model that aims to exploit the most recent deep VAE methodologies to improve upon the results of similar models. VDVAE-SR tackles image super-resolution using transfer learning on pretrained VDVAEs. The presented model is competitive with other state-of-the-art models, having comparable results on image quality metrics.
Time-series Transformer Generative Adversarial Networks ; Many real-world tasks are plagued by limitations on data: in some instances very little data is available, and in others, data is protected by privacy-enforcing regulations (e.g. GDPR). We consider limitations posed specifically on time-series data and present a model that can generate synthetic time-series which can be used in place of real data. A model that generates synthetic time-series data has two objectives: (1) to capture the stepwise conditional distribution of real sequences, and (2) to faithfully model the joint distribution of entire real sequences. Autoregressive models trained via maximum likelihood estimation can be used in a system where previous predictions are fed back in and used to predict future ones; in such models, errors can accrue over time. Furthermore, a plausible initial value is required, making MLE-based models not really generative. Many downstream tasks learn to model conditional distributions of the time-series, hence synthetic data drawn from a generative model must satisfy (1) in addition to performing (2). We present TsT-GAN, a framework that capitalises on the Transformer architecture to satisfy the desiderata, compare its performance against five state-of-the-art models on five datasets, and show that TsT-GAN achieves higher predictive performance on all datasets.
Text Generation with Text-Editing Models ; Text-editing models have recently become a prominent alternative to seq2seq models for monolingual text-generation tasks such as grammatical error correction, simplification, and style transfer. These tasks share a common trait: they exhibit a large amount of textual overlap between the source and target texts. Text-editing models take advantage of this observation and learn to generate the output by predicting edit operations applied to the source sequence. In contrast, seq2seq models generate outputs word-by-word from scratch, thus making them slow at inference time. Text-editing models provide several benefits over seq2seq models, including faster inference speed, higher sample efficiency, and better control and interpretability of the outputs. This tutorial provides a comprehensive overview of text-editing models and current state-of-the-art approaches, and analyzes their pros and cons. We discuss challenges related to productionization and how these models can be used to mitigate hallucination and bias, both pressing challenges in the field of text generation.
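As a minimal illustration of why edit-based generation can be fast, the sketch below applies per-token edit operations to a source sentence using a generic KEEP/DELETE/ADD tag set; this is not any particular system's tagging scheme. Most tokens are simply kept, so a model only has to predict one short tag per source position instead of decoding the output word by word.

```python
from typing import List, Tuple

# Each source token gets an edit operation: ("KEEP", ""), ("DELETE", ""),
# or ("ADD", "phrase"), meaning keep the token and insert a phrase after it.
Edit = Tuple[str, str]

def apply_edits(source: List[str], edits: List[Edit]) -> List[str]:
    """Reconstruct the target text from per-token edit operations."""
    out = []
    for token, (op, arg) in zip(source, edits):
        if op == "KEEP":
            out.append(token)
        elif op == "DELETE":
            continue
        elif op == "ADD":            # keep the token, then insert new text
            out.append(token)
            out.extend(arg.split())
    return out

# Grammatical-error-correction style example: "she go to school yesterday"
# becomes "she went to school yesterday" with only two non-KEEP tags.
src = "she go to school yesterday".split()
edits = [("ADD", "went"), ("DELETE", ""), ("KEEP", ""), ("KEEP", ""), ("KEEP", "")]
print(" ".join(apply_edits(src, edits)))
```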
Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One ; Autoregressive generative models are commonly used, especially for those tasks involving sequential data. They have, however, been plagued by a slew of inherent flaws due to the intrinsic characteristics of chain-style conditional modeling (e.g., exposure bias or lack of long-range coherence), severely limiting their ability to model distributions properly. In this paper, we propose a unique method, termed E-ARM, for training autoregressive generative models that takes advantage of a well-designed energy-based learning objective. By leveraging the extra degree of freedom of the softmax operation, we are allowed to make the autoregressive model itself be an energy-based model for measuring the likelihood of input without introducing any extra parameters. Furthermore, we show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem and increasing temporal coherence for autoregressive generative models. Extensive empirical results, covering benchmarks like language modeling, neural machine translation, and image generation, demonstrate the effectiveness of the proposed approach.
Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise ; Standard diffusion models involve an image transform (adding Gaussian noise) and an image restoration operator that inverts this degradation. We observe that the generative behavior of diffusion models is not strongly dependent on the choice of image degradation, and in fact an entire family of generative models can be constructed by varying this choice. Even when using completely deterministic degradations (e.g., blur, masking, and more), the training and test-time update rules that underlie diffusion models can be easily generalized to create generative models. The success of these fully deterministic models calls into question the community's understanding of diffusion models, which relies on noise in either gradient Langevin dynamics or variational inference, and paves the way for generalized diffusion models that invert arbitrary processes. Our code is available at https://github.com/arpitbansal297/Cold-Diffusion-Models.
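A schematic of the generalized sampling loop, under the assumption of a deterministic degradation D and a learned restoration network R; the update below follows the "improved" sampling rule reported for cold diffusion, and the placeholder D and R exist only to make the sketch runnable, they are not the paper's operators.

```python
import torch

def cold_diffusion_sample(R, D, x_T, T):
    """Schematic reverse process for a deterministic degradation D
    (e.g., blur or masking) and a learned restoration network R.

    Uses the update  x_{s-1} = x_s - D(x0_hat, s) + D(x0_hat, s-1),
    which is more robust to imperfect restoration than the naive
    x_{s-1} = D(x0_hat, s-1).
    """
    x = x_T
    for s in range(T, 0, -1):
        x0_hat = R(x, s)                           # estimate the clean image
        x = x - D(x0_hat, s) + D(x0_hat, s - 1)    # re-degrade to the previous level
    return x

# Placeholder operators: D fades the image toward zero as s grows (a stand-in
# for blur/masking) and R inverts that fading. A real model would learn R.
T = 10
D = lambda x0, s: x0 * (1.0 - s / T)
R = lambda x, s: x / max(1.0 - s / T, 1e-3)
x_T = torch.rand(1, 3, 8, 8)                       # fully degraded input at s = T
print(cold_diffusion_sample(R, D, x_T, T).shape)
```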
Audio-visual speech enhancement with a deep Kalman filter generative model ; Deep latent variable generative models based on the variational autoencoder (VAE) have shown promising performance for audio-visual speech enhancement (AVSE). The underlying idea is to learn a VAE-based audio-visual prior distribution for clean speech data, and then combine it with a statistical noise model to recover a speech signal from a noisy audio recording and video (lip images) of the target speaker. Existing generative models developed for AVSE do not take into account the sequential nature of speech data, which prevents them from fully incorporating the power of visual data. In this paper, we present an audio-visual deep Kalman filter (AV-DKF) generative model which assumes a first-order Markov chain model for the latent variables and effectively fuses audio-visual data. Moreover, we develop an efficient inference methodology to estimate speech signals at test time. We conduct a set of experiments to compare different variants of generative models for speech enhancement. The results demonstrate the superiority of the AV-DKF model compared with both its audio-only version and the non-sequential audio-only and audio-visual VAE-based models.
The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers ; Applying language models to natural language processing tasks typically relies on the representations in the final model layer, as intermediate hidden layer representations are presumed to be less informative. In this work, we argue that due to the gradual improvement across model layers, additional information can be gleaned from the contrast between higher and lower layers during inference. Specifically, in choosing between the probable next-token predictions of a generative model, the predictions of lower layers can be used to highlight which candidates are best avoided. We propose a novel approach that utilizes the contrast between layers to improve text generation outputs, and show that it mitigates degenerative behaviors of the model in open-ended generation, significantly improving the quality of generated texts. Furthermore, our results indicate that contrasting between model layers at inference time can yield substantial benefits to certain aspects of general language model capabilities, more effectively extracting knowledge during inference from a given set of model parameters.
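Here is a generic sketch of layer-contrast decoding in this spirit; the exact scoring rule in the paper may differ. Final-layer and lower-layer next-token log-probabilities are contrasted so that candidates favored by the weaker lower layer are penalized, with a plausibility cutoff so that tokens the final layer already considers implausible are never promoted. The lower-layer logits are assumed to come from projecting an intermediate hidden state through the model's output head, and the alpha and tau values are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_next_token_logits(final_logits, lower_logits, alpha=0.5, tau=0.1):
    """Layer-contrast scoring: reward tokens the final layer likes and
    penalize tokens that the lower (less informed) layer also likes.

    final_logits, lower_logits: [vocab]-shaped next-token logits from the
    last layer and from an intermediate layer, respectively.
    """
    logp_final = F.log_softmax(final_logits, dim=-1)
    logp_lower = F.log_softmax(lower_logits, dim=-1)
    scores = logp_final - alpha * logp_lower                 # contrast the two layers
    # Plausibility mask: keep only tokens whose final-layer probability is
    # within a factor tau of the best option.
    cutoff = logp_final.max() + torch.log(torch.tensor(tau))
    return torch.where(logp_final >= cutoff, scores,
                       torch.full_like(scores, float("-inf")))

# Toy example with a 6-token vocabulary: token 0 is liked by both layers,
# token 1 only by the final layer, so the contrast prefers token 1.
final_logits = torch.tensor([2.0, 1.5, 0.2, -1.0, -2.0, -3.0])
lower_logits = torch.tensor([2.0, -1.0, 0.0, 0.5, -2.0, -3.0])
print(contrastive_next_token_logits(final_logits, lower_logits).argmax())
```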
Assessing the efficacy of large language models in generating accurate teacher responses ; Tack et al. (2023) organized the shared task hosted by the 18th Workshop on Innovative Use of NLP for Building Educational Applications on generation of teacher language in educational dialogues. Following the structure of the shared task, in this study, we attempt to assess the generative abilities of large language models in providing informative and helpful insights to students, thereby simulating the role of a knowledgeable teacher. To this end, we present an extensive evaluation of several benchmark generative models, including GPT-4 (few-shot, in-context learning), fine-tuned GPT-2, and fine-tuned DialoGPT. Additionally, to optimize for pedagogical quality, we fine-tuned the Flan-T5 model using reinforcement learning. Our experimental findings on the Teacher-Student Chatroom Corpus subset indicate the efficacy of GPT-4 over other fine-tuned models, measured using BERTScore and DialogRPT. We hypothesize that several dataset characteristics, including sampling, representativeness, and dialog completeness, pose significant challenges to fine-tuning, thus contributing to the poor generalizability of the fine-tuned models. Finally, we note the need for these generative models to be evaluated with a metric that relies not only on dialog coherence and matched language modeling distribution but also on the model's ability to showcase pedagogical skills.
Learning Evaluation Models from Large Language Models for Sequence Generation ; Large language models achieve state-of-the-art performance on sequence generation evaluation, but typically have a large number of parameters. This presents a computational challenge when applying their evaluation capability at scale. To overcome the challenge, in this paper, we propose ECT, an evaluation capability transfer method, to transfer the evaluation capability from LLMs to relatively lightweight language models. Based on the proposed ECT, we learn various evaluation models from ChatGPT, and employ them as reward models to improve sequence generation models via reinforcement learning and re-ranking approaches. Experimental results on machine translation, text style transfer, and summarization tasks demonstrate the effectiveness of our ECT. Notably, applying the learned evaluation models to sequence generation models results in better generated sequences as evaluated by commonly used metrics and ChatGPT.
Improving Generative Model-based Unfolding with Schrödinger Bridges ; Machine learning-based unfolding has enabled unbinned and high-dimensional differential cross section measurements. Two main approaches have emerged in this research area: one based on discriminative models and one based on generative models. The main advantage of discriminative models is that they learn a small correction to a starting simulation, while generative models scale better to regions of phase space with little data. We propose to use Schrödinger Bridges and diffusion models to create SBUnfold, an unfolding approach that combines the strengths of both discriminative and generative models. The key feature of SBUnfold is that its generative model maps one set of events into another without having to go through a known probability density, as is the case for normalizing flows and standard diffusion models. We show that SBUnfold achieves excellent performance compared to state-of-the-art methods on a synthetic Z+jets dataset.
Compatibility of the expansive nondecelerative universe model with the Newton gravitational theory and the general theory of relativity ; Applying the Vaidya metrics in the model of the Expansive Nondecelerative Universe (ENU) leads to compatibility of the ENU model both with the classic Newton gravitational theory and the general theory of relativity in weak fields.
Inhomogeneous Cosmological Models with Flat Slices Generated from the Einstein-de Sitter Universe ; A family of cosmological models is considered which, in a certain synchronized system of reference, possess flat slices t = const. They are generated from the Einstein-de Sitter universe by a suitable transformation. Under physically reasonable presumptions these transformed models fulfil certain energy conditions.
Generalized XYZ Model Associated to Sklyanin Algebra ; The free energy of a lattice model, which is a generalization of the Heisenberg XYZ model with the higher spin representation of the Sklyanin algebra, is calculated by the generalized Bethe Ansatz of Takhtajan and Faddeev. Talk given at the XXI Differential Geometry Methods in Theoretical Physics, Tianjin, China, 5-9 June 1992.
A Generalization of the Submodel of Nonlinear $CP^1$ Models ; We generalize the submodel of nonlinear $CP^1$ models. The generalized models include higher-order derivatives. For the systems of higher-order equations, we construct a Bäcklund-like transformation of solutions and an infinite number of conserved currents by using the Bell polynomials.
The multi-history approach to the time-travel paradoxes of General Relativity: mathematical analysis of a toy model ; With a mathematical eye to Matt Visser's multi-history approach to the time-travel paradoxes of General Relativity, a non-relativistic toy model is analyzed in order to characterize the conditions under which, in such a toy model, causation occurs.
Generalized self-dual Chern-Simons vortices ; We search for vortices in a generalized Abelian Chern-Simons model with a non-standard kinetic term. We illustrate our results, plotting and comparing several features of the vortex solution of the generalized model with those of the vortex solution found in the standard Chern-Simons model.
Sufficient FTP Schedulability Test for the Non-Cyclic Generalized Multiframe Task Model ; Our goal is to provide a sufficient schedulability test (ideally polynomial) for the scheduling of the Non-Cyclic Generalized Multiframe Task Model using Fixed-Task-Priority schedulers. We report two first results: (i) we present and prove correct the critical instant for the Non-Cyclic Generalized Multiframe Task Model; then (ii) we propose an algorithm which provides a sufficient but pseudo-polynomial schedulability test.
Fourth Generations with an Inert Doublet Higgs ; We explore an extension of the fourth generation model with multi-Higgs doublets and three fermion singlets. The Standard Model neutrinos acquire mass radiatively at the one-loop level, while the fourth generation neutrinos acquire a heavy tree-level mass. The model also contains several Dark Matter candidates whose stability is guaranteed by a $Z_2$ discrete symmetry. The possibility of CP violation in the scalar sector is also briefly discussed.
Single-field attractors ; I describe a simple class of $\alpha$-attractors, generalizing the single-field GL model of inflation in supergravity. The new class of models is defined for $0 < \alpha \lesssim 1$, providing a good match to the present cosmological data. I also present a generalized version of these models which can describe not only inflation but also dark energy and supersymmetry breaking.
On generalized ARCH model with stationary liquidity ; We study a generalized ARCH model with liquidity given by a general stationary process. We provide minimal assumptions that ensure the existence and uniqueness of the stationary solution. In addition, we provide consistent estimators for the model parameters by using an AR(1)-type characterisation. We illustrate our results with several examples and simulation studies.
Iterative Descent Method for Generalized Leontief Model ; In this paper we consider the generalized Leontief model. We show that under certain conditions the generalized Leontief model is solvable by an iterative descent method based on an infeasible interior point algorithm. We prove the convergence of the method from a strictly positive starting point. A numerical example is presented to demonstrate the performance of the algorithm.
Frequency vs. Association for Constraint Selection in Usage-Based Construction Grammar ; A usage-based Construction Grammar (CxG) posits that slot-constraints generalize from common exemplar constructions. But what is the best model of constraint generalization? This paper evaluates competing frequency-based and association-based models across eight languages using a metric derived from the Minimum Description Length paradigm. The experiments show that association-based models produce better generalizations across all languages by a significant margin.
Adversarial Attack with Pattern Replacement ; We propose a generative model for adversarial attacks. The model generates subtle but predictive patterns from the input. To perform an attack, it replaces the patterns of the input with those generated based on examples from some other class. We demonstrate our model by attacking a CNN on MNIST.
Multi-transition solutions for a generalized Frenkel-Kontorova model ; We study a generalized Frenkel-Kontorova model. Using minimal and Birkhoff solutions as building blocks, we construct many homoclinic and heteroclinic solutions for this generalized Frenkel-Kontorova model under gap conditions. These new solutions are no longer minimal or Birkhoff. We use a constrained minimization method to prove our results.
Elementary functions solutions to the Bachelier model generated by Lie point symmetries ; Under the recent negative interest rate situation, the Bachelier model has been attracting attention and adopted for evaluating the price of interest rate options. In this paper we find the Lie point symmetries of the Bachelier partial differential equation (PDE) and use them in order to generate new classes of denumerably infinite elementary function solutions to the Bachelier model from elementary function solutions to it which we derived in a previous publication.
A time-symmetric generalization of quantum mechanics ; I propose a time-symmetric generalization of quantum mechanics that is inspired by scattering theory. The model postulates two interacting quantum states, one traveling forward in time and one backward in time. The interaction is modeled by a unitary scattering operator. I show that this model is equivalent to pseudo-unitary quantum mechanics.
UPainting: Unified Text-to-Image Diffusion Generation with Cross-modal Guidance ; Diffusion generative models have recently greatly improved the power of text-conditioned image generation. Existing image generation models mainly include text-conditional diffusion models and cross-modal guided diffusion models, which are good at small scene image generation and complex scene image generation respectively. In this work, we propose a simple yet effective approach, namely UPainting, to unify simple and complex scene image generation, as shown in Figure 1. Based on architecture improvements and diverse guidance schedules, UPainting effectively integrates cross-modal guidance from a pretrained image-text matching model into a text-conditional diffusion model that utilizes a pretrained Transformer language model as the text encoder. Our key finding is that combining the power of large-scale Transformer language models in understanding language and image-text matching models in capturing cross-modal semantics and style is effective for improving sample fidelity and image-text alignment of image generation. In this way, UPainting has a more general image generation capability, which can generate images of both simple and complex scenes more effectively. To comprehensively compare text-to-image models, we further create a more general benchmark, UniBench, with well-written Chinese and English prompts in both simple and complex scenes. We compare UPainting with recent models and find that UPainting greatly outperforms other models in terms of caption similarity and image fidelity in both simple and complex scenes. UPainting project page: https://upainting.github.io.
The Extractive-Abstractive Axis: Measuring Content Borrowing in Generative Language Models ; Generative language models produce highly abstractive outputs by design, in contrast to extractive responses in search engines. Given this characteristic of LLMs and the resulting implications for content licensing and attribution, we propose the so-called Extractive-Abstractive axis for benchmarking generative models and highlight the need for developing corresponding metrics, datasets and annotation guidelines. We limit our discussion to the text modality.
RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model ; Text-to-image generation (TTI) refers to the usage of models that can process text input and generate high-fidelity images based on text descriptions. Text-to-image generation using neural networks can be traced back to the emergence of the Generative Adversarial Network (GAN), followed by the autoregressive Transformer. Diffusion models are one prominent type of generative model used for the generation of images through the systematic introduction of noise with repeating steps. As an effect of the impressive results of diffusion models on image synthesis, they have been cemented as the major image decoder used by text-to-image models and have brought text-to-image generation to the forefront of machine-learning (ML) research. In the era of large models, scaling up model size and integration with large language models have further improved the performance of TTI models, resulting in generation results nearly indistinguishable from real-world images and revolutionizing the way we retrieve images. Our explorative study has incentivised us to think that there are further ways of scaling text-to-image models with the combination of innovative model architectures and prediction enhancement techniques. We have divided the work of this survey into five main sections, wherein we detail the frameworks of the major literature in order to delve into the different types of text-to-image generation methods. Following this, we provide a detailed comparison and critique of these methods and offer possible pathways of improvement for future work. Regarding future work, we argue that TTI development could yield impressive productivity improvements for creation, particularly in the context of the AIGC era, and could be extended to more complex tasks such as video generation and 3D generation.
Orthomodular-Valued Models for Quantum Set Theory ; In 1981, Takeuti introduced quantum set theory by constructing a model of set theory based on quantum logic, represented by the lattice of closed linear subspaces of a Hilbert space, in a manner analogous to Boolean-valued models of set theory, and showed that appropriate counterparts of the axioms of Zermelo-Fraenkel set theory with the axiom of choice (ZFC) hold in the model. In this paper, we aim at unifying Takeuti's model with Boolean-valued models by constructing models based on general complete orthomodular lattices, and generalizing the transfer principle in Boolean-valued models, which asserts that every theorem in ZFC set theory holds in the models, to a general form holding in every orthomodular-valued model. One of the central problems in this program is the well-known arbitrariness in choosing a binary operation for implication. To clarify what properties are required to obtain the generalized transfer principle, we introduce a class of binary operations extending the implication on Boolean logic, called generalized implications, including even non-polynomially definable operations. We study the properties of those operations in detail and show that all of them admit the generalized transfer principle. Moreover, we determine all the polynomially definable operations for which the generalized transfer principle holds. This result allows us to abandon the Sasaki arrow originally assumed for Takeuti's model and leads to a much more flexible approach to quantum set theory.
Classification Accuracy Score for Conditional Generative Models ; Deep generative models (DGMs) of images are now sufficiently mature that they produce nearly photorealistic samples and obtain scores similar to the data distribution on heuristics such as the Fréchet Inception Distance (FID). These results, especially on large-scale datasets such as ImageNet, suggest that DGMs are learning the data distribution in a perceptually meaningful space and can be used in downstream tasks. To test this latter hypothesis, we use class-conditional generative models from a number of model classes (variational autoencoders, autoregressive models, and generative adversarial networks, GANs) to infer the class labels of real data. We perform this inference by training an image classifier using only synthetic data and using the classifier to predict labels on real data. The performance on this task, which we call Classification Accuracy Score (CAS), reveals some surprising results not identified by traditional metrics, and these constitute our contributions. First, when using a state-of-the-art GAN (BigGAN-deep), Top-1 and Top-5 accuracy decrease by 27.9% and 41.6%, respectively, compared to the original data; and conditional generative models from other model classes, such as the Vector-Quantized Variational Autoencoder-2 (VQ-VAE-2) and Hierarchical Autoregressive Models (HAMs), substantially outperform GANs on this benchmark. Second, CAS automatically surfaces particular classes for which generative models failed to capture the data distribution, and which were previously unknown in the literature. Third, we find traditional GAN metrics such as the Inception Score (IS) and FID neither predictive of CAS nor useful when evaluating non-GAN models. Furthermore, in order to facilitate better diagnoses of generative models, we open-source the proposed metric.
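The CAS protocol itself is easy to sketch: train a classifier purely on labeled samples drawn from the conditional generative model, then report its accuracy on real data. In the sketch below, the toy Gaussian "generator", logistic-regression classifier, and data dimensions are stand-ins for the ImageNet-scale setup used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classification_accuracy_score(sample_fn, real_x, real_y, n_per_class, n_classes):
    """CAS protocol: fit a classifier only on class-conditional synthetic
    samples, then evaluate it on held-out real data."""
    syn_x, syn_y = [], []
    for c in range(n_classes):
        syn_x.append(sample_fn(c, n_per_class))       # generator draws for class c
        syn_y.append(np.full(n_per_class, c))
    clf = LogisticRegression(max_iter=1000).fit(np.concatenate(syn_x),
                                                np.concatenate(syn_y))
    return clf.score(real_x, real_y)                   # accuracy on real data

# Toy stand-in for a conditional generative model: class-dependent Gaussians.
rng = np.random.default_rng(0)
def sample_fn(c, n):
    return rng.normal(loc=c, scale=1.0, size=(n, 8))

real_x = np.concatenate([rng.normal(loc=c, scale=1.0, size=(50, 8)) for c in range(3)])
real_y = np.repeat(np.arange(3), 50)
print(classification_accuracy_score(sample_fn, real_x, real_y, n_per_class=200, n_classes=3))
```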
The Utility of General Domain Transfer Learning for Medical Language Tasks ; The purpose of this study is to analyze the efficacy of transfer learning techniques and transformer-based models as applied to medical natural language processing (NLP) tasks, specifically radiological text classification. We used 1,977 labeled head CT reports, from a corpus of 96,303 total reports, to evaluate the efficacy of pretraining using general domain corpora and a combined general and medical domain corpus with a bidirectional encoder representations from transformers (BERT) model for the purpose of radiological text classification. Model performance was benchmarked against a logistic regression using bag-of-words vectorization and a long short-term memory (LSTM) multi-label multi-class classification model, and compared to the published literature in medical text classification. The BERT models using either set of pretrained checkpoints outperformed the logistic regression model, achieving sample-weighted average F1-scores of 0.87 and 0.87 for the general domain model and the combined general and biomedical-domain model. General text transfer learning may be a viable technique to generate state-of-the-art results within medical NLP tasks on radiological corpora, outperforming other deep models such as LSTMs. The efficacy of pretraining and transformer-based models could serve to facilitate the creation of groundbreaking NLP models in the uniquely challenging data environment of medical text.
How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models ; Devising domain- and model-agnostic evaluation metrics for generative models is an important and as yet unresolved problem. Most existing metrics, which were tailored solely to the image synthesis setup, exhibit a limited capacity for diagnosing the different modes of failure of generative models across broader application domains. In this paper, we introduce a 3-dimensional evaluation metric, ($\alpha$-Precision, $\beta$-Recall, Authenticity), that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion. Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity. We introduce generalization as an additional, independent dimension (to the fidelity-diversity trade-off) that quantifies the extent to which a model copies training data, a crucial performance indicator when modeling sensitive data with requirements on privacy. The three metric components correspond to interpretable probabilistic quantities, and are estimated via sample-level binary classification. The sample-level nature of our metric inspires a novel use case which we call model auditing, wherein we judge the quality of individual samples generated by a black-box model, discarding low-quality samples and hence improving the overall model performance in a post-hoc manner.
Discrepancies in Epidemiological Modeling of Aggregated Heterogeneous Data ; Within epidemiological modeling, the majority of analyses assume a single epidemic process for generating ground-truth data. However, this assumed data generation process can be unrealistic, since data sources for epidemics are often aggregated across geographic regions and communities. As a result, state-of-the-art models for estimating epidemiological parameters, e.g. transmission rates, can be inappropriate when faced with complex systems. Our work empirically demonstrates some limitations of applying epidemiological models to aggregated datasets. We generate three complex outbreak scenarios by combining incidence curves from multiple epidemics that are independently simulated via SEIR models with different sets of parameters. Using these scenarios, we assess the robustness of a state-of-the-art Bayesian inference method that estimates the epidemic trajectory from viral load surveillance data. We evaluate two data-generating models within this Bayesian inference framework: a simple exponential growth model and a highly flexible Gaussian process prior model. Our results show that both models generate accurate transmission rate estimates for the combined incidence curve at the cost of generating biased estimates for each underlying epidemic, reflecting highly heterogeneous underlying population dynamics. The exponential growth model, while interpretable, is unable to capture the complexity of the underlying epidemics. With sufficient surveillance data, the Gaussian process prior model captures the shape of complex trajectories, but is imprecise for periods of low data coverage. Thus, our results highlight the potential pitfalls of neglecting complexity and heterogeneity in the data generation process, which can mask underlying location- and population-specific epidemic dynamics.
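A minimal sketch of the data-generation setup described here, assuming a simple deterministic discrete-time SEIR and illustrative parameter values (not those used in the paper): several epidemics with different transmission rates are simulated independently and their incidence curves are summed into one aggregated curve, the kind of pooled signal an inference method would then be fit to.

```python
import numpy as np

def seir_incidence(beta, sigma=1/3, gamma=1/7, N=1e6, E0=10, days=120):
    """Deterministic discrete-time SEIR; returns the daily number of new
    infections (S -> E transitions) as a simple incidence proxy."""
    S, E, I, R = N - E0, float(E0), 0.0, 0.0
    incidence = []
    for _ in range(days):
        new_exposed = beta * S * I / N
        new_infectious = sigma * E
        new_recovered = gamma * I
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        incidence.append(new_exposed)
    return np.array(incidence)

# Aggregate three epidemics with different transmission rates, as when data
# from heterogeneous regions are pooled into a single reported curve.
betas = [0.35, 0.25, 0.5]
curves = [seir_incidence(b) for b in betas]
aggregated = np.sum(curves, axis=0)
print(aggregated[:10].round(1))
```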
Regression Transformer Concurrent sequence regression and generation for molecular language modeling ; Despite significant progress of generative models in the natural sciences, their controllability remains challenging. One fundamentally missing aspect of molecular or protein generative models is an inductive bias that can reflect continuous properties of interest. To that end, we propose the Regression Transformer RT, a novel method that abstracts regression as a conditional sequence modeling problem. This introduces a new paradigm of multitask language models which seamlessly bridge sequence regression and conditional sequence generation. We thoroughly demonstrate that, despite using a nominalscale training objective, the RT matches or surpasses the performance of conventional regression models in property prediction tasks of small molecules, proteins and chemical reactions. Critically, priming the same model with continuous properties yields a highly competitive conditional generative model that outperforms specialized approaches in a substructureconstrained, propertydriven molecule generation benchmark. Our dichotomous approach is facilitated by a novel, alternating training scheme that enables the model to decorate seed sequences by desired properties, e.g., to optimize reaction yield. In sum, the RT is the first report of a multitask model that concurrently excels at predictive and generative tasks in biochemistry. This finds particular application in propertydriven, local exploration of the chemical or protein space and could pave the road toward foundation models in material design. The code to reproduce all experiments of the paper is available at httpsgithub.comIBMregressiontransformer
Disentangled3D Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images ; Learning 3D generative models from a dataset of monocular images enables selfsupervised 3D reasoning and controllable synthesis. Stateoftheart 3D generative models are GANs which use neural 3D volumetric representations for synthesis. Images are synthesized by rendering the volumes from a given camera. These models can disentangle the 3D scene from the camera viewpoint in any generated image. However, most models do not disentangle other factors of image formation, such as geometry and appearance. In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations. Our model can disentangle the geometry and appearance variations in the scene, i.e., we can independently sample from the geometry and appearance spaces of the generative model. This is achieved using a novel nonrigid deformable scene formulation. A 3D volume which represents an object instance is computed as a nonrigidly deformed canonical 3D volume. Our method learns the canonical volume, as well as its deformations, jointly during training. This formulation also helps us improve the disentanglement between the 3D scene and the camera viewpoints using a novel pose regularization loss defined on the 3D deformation field. In addition, we further model the inverse deformations, enabling the computation of dense correspondences between images generated by our model. Finally, we design an approach to embed real images into the latent space of our disentangled generative model, enabling editing of real images.
T2TD Text3D Generation Model based on Prior Knowledge Guidance ; In recent years, 3D models have been utilized in many applications, such as autonomous driving, 3D reconstruction, VR, and AR. However, the scarcity of 3D model data does not meet practical demands. Thus, generating highquality 3D models efficiently from textual descriptions is a promising but challenging way to solve this problem. In this paper, inspired by the ability of human beings to complement visual information details from ambiguous descriptions based on their own experience, we propose a novel text3D generation model T2TD, which introduces related shapes or textual information as prior knowledge to improve the performance of the 3D generation model. In this process, we first introduce the text3D knowledge graph to save the relationship between 3D models and textual semantic information, which can provide the related shapes to guide the target 3D model generation. Second, we integrate an effective causal inference model to select useful feature information from these related shapes, which removes the unrelated shape information and only maintains feature information that is strongly relevant to the textual description. Meanwhile, to effectively integrate multimodal prior knowledge into textual information, we adopt a novel multilayer transformer structure to progressively fuse related shape and textual information, which can effectively compensate for the lack of structural information in the text and enhance the final performance of the 3D generation model. The final experimental results demonstrate that our approach significantly improves 3D model generation quality and outperforms the SOTA methods on the text2shape datasets.
ToolAlpaca Generalized Tool Learning for Language Models with 3000 Simulated Cases ; Enabling large language models to utilize realworld tools effectively is crucial for achieving embodied intelligence. Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT4, to attain generalized tooluse abilities in a zeroshot manner, or utilized supervised learning to train limited scopes of tools on compact models. However, it remains uncertain whether smaller language models can achieve generalized tooluse abilities without toolspecific training. To address this question, this paper introduces ToolAlpaca, a novel framework designed to automatically generate a diverse tooluse corpus and learn generalized tooluse abilities on compact language models with minimal human intervention. Specifically, ToolAlpaca first automatically creates a highly diversified tooluse corpus by building a multiagent simulation environment. The corpus contains 3938 tooluse instances from more than 400 realworld tool APIs spanning 50 distinct categories. Subsequently, the constructed corpus is employed to finetune compact language models, resulting in two models, namely ToolAlpaca7B and ToolAlpaca13B, respectively. Finally, we evaluate the ability of these models to utilize previously unseen tools without specific training. Experimental results demonstrate that ToolAlpaca achieves effective generalized tooluse capabilities comparable to those of extremely large language models like GPT3.5, demonstrating that learning generalized tooluse ability is feasible for compact language models.
Cosmology with a Variable Chaplygin Gas ; We consider a new generalized Chaplygin gas model that includes the original Chaplygin gas model as a special case. In such a model the generalized Chaplygin gas evolves from dust to quiessence or phantom. We show that the background evolution for the model is equivalent to that for a coupled dark energy model with dark matter. The constraints from the current type Ia supernova data favour a phantomlike Chaplygin gas model.
Approximate NGram Markov Model for Natural Language Generation ; This paper proposes an Approximate ngram Markov Model for bag generation. Directed word association pairs with distances are used to approximate the (n-1)-gram and n-gram training tables. This model has the parameters of the word association model, and the merits of both the word association model and the Markov model. The training knowledge for bag generation can also be applied to lexical selection in machine translation design.
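A minimal sketch of the general idea follows; the distance cutoff and the inverse-distance weighting are simplifying choices of mine, not the paper's exact formulation.

```python
# Sketch: approximate a bigram table from directed word-association pairs with distances.
from collections import defaultdict

def association_table(sentences, max_dist=3):
    table = defaultdict(float)
    for words in sentences:
        for i, w in enumerate(words):
            for d in range(1, max_dist + 1):
                if i + d < len(words):
                    table[(w, words[i + d], d)] += 1.0 / d  # nearer pairs weighted more (my choice)
    return table

def approx_bigram(table, w1, w2):
    num = sum(c for (a, b, _), c in table.items() if a == w1 and b == w2)
    den = sum(c for (a, _, _), c in table.items() if a == w1)
    return num / den if den else 0.0

table = association_table([["the", "dog", "chased", "the", "cat"]])
print(approx_bigram(table, "the", "dog"))
```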
Covariant generalization of the ISGW quark model ; A fairly general Lorentzcovariant quark model of mesons is constructed. It has several versions whose nonrelativistic limit corresponds to the wellknown Isgur, Scora, Grinstein, and Wise model. In the heavyquark limit, the covariant model naturally and automatically produces the heavyquark symmetry results for meson decay constants and semileptonic decay form factors. The meson decay constants and the IsgurWise functions are calculated for various versions of the covariant model and compared with other estimates. A general and adaptable structure of the covariant model ensures that it can be used to describe transitions involving light andor heavy mesons.
Yukawa Interaction from a SUSY Composite Model ; We present a composite model that is based on nonperturbative effects of N=1 supersymmetric SU(N_C) gauge theory with N_f = N_C + 1 flavors. In this model, we consider N_C = 7, where all matter fields in the supersymmetric standard model, that is, quarks, leptons and Higgs particles, are bound states of preons and antipreons. When the SU(7)_H hypercolor coupling becomes strong, Yukawa couplings of quarks and leptons are generated dynamically. We first present a one generation model, and then models with three generations.
Some Recent Results from the Generic Supersymmetric Standard Model ; The generic supersymmetric standard model is a model built from a supersymmetrized standard model field spectrum and the gauge symmetries only. The popular minimal supersymmetric standard model differs from the generic version in having Rparity imposed by hand. We review an efficient formulation of the model and some of the recently obtained interesting phenomenological features. The latter include Rparity violating contributions to scalar masses that had been largely overlooked and the related contributions to fermion electric dipole moments and mu to e gamma.
Electric Dipole Moments in the Generic Supersymmetric Standard Model ; The generic supersymmetric standard model is a model built from a supersymmetrized standard model field spectrum and the gauge symmetries only. The popular minimal supersymmetric standard model differs from the generic version in having Rparity imposed by hand. We review an efficient formulation of the model and some of the recently obtained interesting phenomenological features, focusing on oneloop contributions to fermion electric dipole moments.
No Chaos in BraneWorld Cosmology ; We discuss the asymptotic dynamical evolution of spatially homogeneous braneworld cosmological models close to the initial singularity. We find that generically the cosmological singularity is isotropic in Bianchi type IX braneworld models and consequently these models do not exhibit Mixmaster or chaoticlike behaviour close to the initial singularity. We argue that this is typical of more general cosmological models in the braneworld scenario. In particular, we show that an isotropic singularity is a pastattractor in all orthogonal Bianchi models and is a local pastattractor in a class of inhomogeneous braneworld models.
General Gauge Mediation ; We give a general definition of gauge mediated supersymmetry breaking which encompasses all the known gauge mediation models. In particular, it includes both models with messengers as well as direct mediation models. A formalism for computing the soft terms in the generic model is presented. Such a formalism is necessary in stronglycoupled direct mediation models where perturbation theory cannot be used. It allows us to identify features of the entire class of gauge mediation models and to distinguish them from specific signatures of various subclasses.
On Convergence to SLE6 I Conformal Invariance for Certain Models of the BondTriangular Type ; Following the approach outlined in [26], convergence to SLE6 of the Exploration Processes for the correlated bondtriangular type models studied in [11] is established. This puts the said models in the same universality class as the standard site percolation model on the triangular lattice [27]. In the context of these models, the result is proven for all domains with boundary Minkowski dimension less than two. Moreover, the proof of convergence applies in the context of general critical 2D percolation models and for general domains, under the stipulation that Cardy's Formula can be established for domains in this generality.
Statistical Inference for ValuedEdge Networks Generalized Exponential Random Graph Models ; Across the sciences, the statistical analysis of networks is central to the production of knowledge on relational phenomena. Because of their ability to model the structural generation of networks, exponential random graph models are a ubiquitous means of analysis. However, they are limited by an inability to model networks with valued edges. We solve this problem by introducing a class of generalized exponential random graph models capable of modeling networks whose edges are valued, thus greatly expanding the scope of networks applied researchers can subject to statistical analysis.
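For reference, the exponential-family form that such models build on can be written schematically as follows, with h(y) a vector of network statistics and theta the parameters; the valued-edge generalization replaces the sum over binary graphs with a sum or integral over the support of the edge values.

\[
P_\theta(Y = y) \;=\; \frac{\exp\{\theta^{\top} h(y)\}}{\sum_{y' \in \mathcal{Y}} \exp\{\theta^{\top} h(y')\}}, \qquad y \in \mathcal{Y}.
\]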
The Structure of Signals Causal Interdependence Models for Games of Incomplete Information ; Traditional economic models typically treat private information, or signals, as generated from some underlying state. Recent work has explicated alternative models, where signals correspond to interpretations of available information. We show that the difference between these formulations can be sharply cast in terms of causal dependence structure, and employ graphical models to illustrate the distinguishing characteristics. The graphical representation supports inferences about signal patterns in the interpreted framework, and suggests how results based on the generated model can be extended to more general situations. Specific insights about bidding games in classical auction mechanisms derive from qualitative graphical models.
On a class of growthmaximal hardcore processes ; Generalizing the wellknown lilypond model we introduce a growthmaximal hardcore model based on a spacetime point process of convex particles. Using a purely deterministic algorithm we prove under fairly general assumptions that the model exists and is uniquely determined by the point process. Under an additional stationarity assumption we show that the model does not percolate. Our model generalizes the lilypond model considerably even if all grains are born at the same time. In that case and under a Poisson assumption we prove a central limit theorem in a large volume scenario.
An Exponential F(R) Dark Energy Model ; We present an exponential F(R) modified gravity model in the Jordan and the Einstein frame. We use a general approach in order to investigate and demonstrate the viability of the model. Apart from the general features that this model has, which actually render it viable at a first step, we address the issues of finite time singularities, Newton's law corrections and the scalaron mass. As we will evince, the model passes these latter two tests successfully and also has no finite time singularities, a feature inherent to other well studied exponential models.
Asymptotics for regression models under loss of identifiability ; This paper discusses the asymptotic behavior of regression models under general conditions. First, we give a general inequality for the difference between the sum of squared errors SSE of the estimated regression model and the SSE of the theoretical best regression function in our model. A set of generalized derivative functions is a key tool in deriving such an inequality. Under a suitable Donsker condition for this set, we give the asymptotic distribution for the difference of SSE. We show how to get this Donsker property for parametric models even if the parameters characterizing the best regression function are not unique. This result is applied to neural network regression models with redundant hidden units when loss of identifiability occurs.
A Joint Model for Question Answering and Question Generation ; We propose a generative machine comprehension model that learns jointly to ask and answer questions based on documents. The proposed model uses a sequencetosequence framework that encodes the document and generates a question given an answer, or an answer given a question. Significant improvement in model performance is observed empirically on the SQuAD corpus, confirming our hypothesis that the model benefits from jointly learning to perform both tasks. We believe the joint model's novelty offers a new perspective on machine comprehension beyond architectural engineering, and serves as a first step towards autonomous information seeking.
Generalized Autoregressive Neural Network Models ; A time series is a sequence of observations taken sequentially in time. The autoregressive integrated moving average is the class of models most commonly used for time series data. However, this class of models has two critical limitations: it fits well only Gaussian data, and only with a linear correlation structure. Here, I present a new model, named the generalized autoregressive neural network GARNN. The GARNN is an extension of the generalized linear model where the marginal mean depends on the lagged values via the inclusion of a neural network in the link function. A practical application of the model is shown using the wellknown poliomyelitis case number data, originally analyzed by Zeger and Qaqish 1988.
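A minimal sketch of the kind of model described, under simplifying choices of my own (Poisson observations, a log link, one hidden layer, and fixed random weights purely for illustration):

```python
# Sketch of a GARNN-style process: Poisson counts whose log-mean is a small
# neural network of the lagged counts (weights are random placeholders).
import numpy as np

rng = np.random.default_rng(0)
p = 3                                   # number of lags fed to the network
W1, b1 = rng.normal(size=(4, p)), np.zeros(4)
w2, b2 = rng.normal(size=4), 0.0

def poisson_mean(lags):
    hidden = np.tanh(W1 @ lags + b1)    # neural network inside the link function
    return np.exp(w2 @ hidden + b2)     # log link, as in a Poisson GLM

def simulate(n=100):
    y = [1.0] * p
    for _ in range(n):
        y.append(rng.poisson(poisson_mean(np.array(y[-p:], dtype=float))))
    return np.array(y[p:])

counts = simulate()
```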
Generalized Additive Model Selection ; We introduce GAMSEL Generalized Additive Model Selection, a penalized likelihood approach for fitting sparse generalized additive models in high dimension. Our method interpolates between null, linear and additive models by allowing the effect of each variable to be estimated as being either zero, linear, or a lowcomplexity curve, as determined by the data. We present a blockwise coordinate descent procedure for efficiently optimizing the penalized likelihood objective over a dense grid of the tuning parameter, producing a regularization path of additive models. We demonstrate the performance of our method on both real and simulated data examples, and compare it with existing techniques for additive model selection.
On soliton solutions of the timediscrete generalized lattice Heisenberg magnet model ; The generalized lattice Heisenberg magnet model is an integrable model exhibiting soliton solutions. The model is physically important for describing magnon bound states, or soliton excitations with arbitrary spin, in magnetic materials. In this paper, a timediscrete generalized lattice Heisenberg magnet GLHM model is investigated. By writing down the Lax pair representation of the timediscrete GLHM model, we present explicitly the underlying integrable structure, such as the Darboux transformation and soliton solutions.
Augmented Generator Subtransient Model Using Dynamic Phasor Measurements ; In this article, we present a new model for a synchronous generator based on phasor measurement unit PMU data. The proposed subtransient model allows us to estimate the dynamic state variables as well as to calibrate model parameters. The motivation for this new model is to use more efficiently the PMU measurements which are becoming widely available in power grids. The concept of the phasor derivative is applied, which includes not only the signal phase derivative but also its amplitude derivative. Applying known nonlinear estimation techniques, we study the merits of this new model. In particular, we test robustness by considering a generator with different mechanical power controls.
A Relationship Between SIR Model and Generalized Logistic Distribution with Applications to SARS and COVID19 ; This paper shows that the generalized logistic distribution model is derived from the wellknown compartment model, consisting of susceptible, infected and recovered compartments, abbreviated as the SIR model, under certain conditions. In the SIR model, there are uncertainties in predicting the final values for the number of infected population and the infectious parameter. However, by utilizing the information obtained from the generalized logistic distribution model, we can perform the SIR numerical computation more stably and more accurately. Applications to severe acute respiratory syndrome SARS and Coronavirus disease 2019 COVID19 using this combined method are also introduced.
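For orientation, the standard SIR system and one common parameterization of the generalized logistic (Richards) growth law, which the paper links under certain conditions, read

\[
\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I,
\]
\[
\frac{dC}{dt} = r\, C \left[ 1 - \left( \frac{C}{K} \right)^{\alpha} \right],
\]

where C(t) is a cumulative case count, K its final size, and r and alpha shape parameters; the precise correspondence between (beta, gamma) and (r, K, alpha) is the subject of the paper and is not reproduced here.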
The Generalization Error of the Minimumnorm Solutions for Overparameterized Neural Networks ; We study the generalization properties of minimumnorm solutions for three overparameterized machine learning models including the random feature model, the twolayer neural network model and the residual network model. We prove that for all three models, the generalization error for the minimumnorm solution is comparable to the Monte Carlo rate, up to some logarithmic terms, as long as the models are sufficiently overparameterized.
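As a reminder of the object being analyzed, in the linear and random-feature cases the minimum-norm interpolating solution has a closed form; assuming the feature matrix Phi in R^{n x m} has full row rank with n < m,

\[
\hat{\theta} \;=\; \arg\min_{\theta :\, \Phi\theta = y} \|\theta\|_2 \;=\; \Phi^{\top} \left( \Phi \Phi^{\top} \right)^{-1} y .
\]

The analogous notions of norm for the two network models are not reproduced here.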
Linear Models are Most Favorable among Generalized Linear Models ; We establish a nonasymptotic lower bound on the L2 minimax risk for a class of generalized linear models. It is further shown that the minimax risk for the canonical linear model matches this lower bound up to a universal constant. Therefore, the canonical linear model may be regarded as most favorable among the considered class of generalized linear models in terms of minimax risk. The proof makes use of an informationtheoretic Bayesian Cramér-Rao bound for logconcave priors, established by Aras et al. 2019.
Extended Koopman Models ; We introduce two novel generalizations of the Koopman operator method of nonlinear dynamic modeling. Each of these generalizations leads to greatly improved predictive performance without sacrificing a unique trait of Koopman methods the potential for fast, globally optimal control of nonlinear, nonconvex systems. The first generalization, Convex Koopman Models, uses convex rather than linear dynamics in the lifted space. The second, Extended Koopman Models, additionally introduces an invertible transformation of the control signal which contributes to the lifted convex dynamics. We describe a deep learning architecture for parameterizing these classes of models, and show experimentally that each significantly outperforms traditional Koopman models in trajectory prediction for two nonlinear, nonconvex dynamic systems.
Invariance and Equivariance for Optimal Designs in Generalized Linear Models The Gamma Model ; We give an overview of the usefulness of the concepts of equivariance and invariance in the design of experiments for generalized linear models. In contrast to linear models, here pairs of transformations have to be considered which act simultaneously on the experimental settings and on the location parameters in the linear component. Given the transformation of the experimental settings, the parameter transformations are not unique and may be nonlinear to make further use of the model structure. The general concepts and results are illustrated by models with gamma distributed response. Locally optimal and maximin efficient designs are obtained for the common D and IMSE criteria.
SumProductAttention Networks Leveraging SelfAttention in Probabilistic Circuits ; Probabilistic circuits PCs have become the defacto standard for learning and inference in probabilistic modeling. We introduce SumProductAttention Networks SPAN, a new generative model that integrates probabilistic circuits with Transformers. SPAN uses selfattention to select the most relevant parts of a probabilistic circuit, here sumproduct networks, to improve the modeling capability of the underlying sumproduct network. We show that while modeling, SPAN focuses on a specific set of independence assumptions in every product layer of the sumproduct network. Our empirical evaluations show that SPAN outperforms stateoftheart probabilistic generative models on various benchmark data sets, and is also an efficient generative image model.
On Johnson's sufficientness postulates for featuressampling models ; In the 1920's, the English philosopher W.E. Johnson introduced a characterization of the symmetric Dirichlet prior distribution in terms of its predictive distribution. This is typically referred to as Johnson's sufficientness postulate, and it has been the subject of many contributions in Bayesian statistics, leading to predictive characterization for infinitedimensional generalizations of the Dirichlet distribution, i.e. speciessampling models. In this paper, we review sufficientness postulates for speciessampling models, and then investigate analogous predictive characterizations for the more general featuressampling models. In particular, we present a sufficientness postulate for a class of featuressampling models referred to as Scaled Processes SPs, and then discuss analogous characterizations in the general setup of featuressampling models.
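For concreteness, the classical finite-category form of the postulate requires the predictive probability of a category to depend only on that category's count and the sample size; the symmetric Dirichlet prior with parameter alpha over k categories is the answer (up to degenerate cases), giving

\[
\Pr(X_{n+1} = j \mid X_1, \ldots, X_n) \;=\; \frac{n_j + \alpha}{n + k\alpha},
\]

where n_j is the number of previous observations in category j. The paper develops analogous predictive characterizations for feature-sampling models.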
RITA a Study on Scaling Up Generative Protein Sequence Models ; In this work we introduce RITA, a suite of autoregressive generative models for protein sequences, with up to 1.2 billion parameters, trained on over 280 million protein sequences belonging to the UniRef100 database. Such generative models hold the promise of greatly accelerating protein design. We conduct the first systematic study of how capabilities evolve with model size for autoregressive transformers in the protein domain: we evaluate RITA models in next amino acid prediction, zeroshot fitness, and enzyme function prediction, showing benefits from increased scale. We release the RITA models openly, to the benefit of the research community.
Application of a General Family of Bivariate Distributions in Modelling Dependent Competing Risks Data with Associated Model Selection ; In this article, a general family of bivariate distributions is used to model competing risks data with dependent factors. The general structure of competing risks data considered here includes ties. A comprehensive inferential framework for the proposed model is presented: maximum likelihood estimation, confidence interval construction, and model selection within the bivariate family of distributions for a given dependent competing risks data set. The inferential methods are very convenient to implement. Through detailed simulations, the inferential methods are observed to provide quite reasonable results. Analysis of real data from the Diabetic Retinopathy Study is carried out with the help of the proposed model as an illustrative example.
CLIPDiffusionLM Apply Diffusion Model on Image Captioning ; The image captioning task has been extensively researched in previous work. However, few experiments focus on generating captions with a nonautoregressive text decoder. Inspired by the recent success of denoising diffusion models on image synthesis tasks, we apply denoising diffusion probabilistic models to text generation in image captioning tasks. We show that our CLIPDiffusionLM is capable of generating image captions using significantly fewer inference steps than autoregressive models. On the Flickr8k dataset, the model achieves a 0.1876 BLEU4 score. By training on the combined Flickr8k and Flickr30k dataset, our model achieves a 0.2470 BLEU4 score. Our code is available at httpsgithub.comxushitongdiffusionimagecaptioning.
Replacing Language Model for Style Transfer ; We introduce the replacing language model RLM, a sequencetosequence language modeling framework for text style transfer. Our method autoregressively replaces each token in the original sentence with a text span in the target style. The new span itself is generated via a nonautoregressive masked language model. The RLM generation scheme combines the flexibility of autoregressive models and the accuracy of nonautoregressive models, which bridges the gap between sentencelevel and wordlevel style transfer methods. To further control the style of generated sentences, we conduct stylecontent disentanglement on the hidden representations of the RLM. Empirical results on realworld text style transfer tasks demonstrate the effectiveness of the RLM compared with other baselines.
Generative probabilistic matrix model of data with different lowdimensional linear latent structures ; We construct a generative probabilistic matrix model of large data based on mixing of linear latent features distributed following Gaussian and Dirichlet distributions. A key ingredient of our model is that we allow for statistical dependence between the mixing coefficients, as well as latent features with a statistically dependent structure. Latent dimensionality and correlation patterns of the data are controlled by two model parameters. The model's data patterns include overlapping clusters, sparse mixing, and constrained nonnegative mixing. We describe the correlation and the eigenvalue distributions of these patterns. As a possible application of our model, we discuss how it can be used to generate structured training data for supervised learning.
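A small sketch of the kind of generative process described, with arbitrary parameter choices of my own (here the mixing coefficients are drawn independently, without the additional statistical dependence the model allows):

```python
# Sketch: data matrix = Dirichlet-distributed mixing coefficients times Gaussian latent features.
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 500, 50, 5                     # samples, observed dimension, latent dimension
alpha = 0.3                              # small alpha gives sparse, cluster-like mixing
features = rng.normal(size=(k, d))                    # Gaussian latent features
mixing = rng.dirichlet(alpha * np.ones(k), size=n)    # nonnegative mixing coefficients
data = mixing @ features + 0.1 * rng.normal(size=(n, d))
eigenvalues = np.linalg.eigvalsh(np.cov(data.T))      # spectrum whose shape the model controls
```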
Scorebased Generative Modeling Through Backward Stochastic Differential Equations Inversion and Generation ; The proposed BSDEbased diffusion model represents a novel approach to diffusion modeling, which extends the application of stochastic differential equations SDEs in machine learning. Unlike traditional SDEbased diffusion models, our model can determine the initial conditions necessary to reach a desired terminal distribution by adapting an existing score function. We demonstrate the theoretical guarantees of the model, the benefits of using Lipschitz networks for score matching, and its potential applications in various areas such as diffusion inversion, conditional diffusion, and uncertainty quantification. Our work represents a contribution to the field of scorebased generative learning and offers a promising direction for solving realworld problems.
Teaching the Pretrained Model to Generate Simple Texts for Text Simplification ; Randomly masking text spans in ordinary texts in the pretraining stage hardly allows models to acquire the ability to generate simple texts. This can hurt the performance of pretrained models on text simplification tasks. In this paper, we propose a new continued pretraining strategy to teach the pretrained model to generate simple texts. We continue pretraining BART, a representative model, to obtain SimpleBART. It consistently and significantly improves the results on lexical simplification, sentence simplification, and documentlevel simplification tasks over BART. Finally, we compare SimpleBART with several representative large language models LLMs.
A Rational Model of Dimensionreduced Human Categorization ; Existing models in cognitive science typically treat human categorization as graded generalization behavior in a multidimensional psychological space. However, category representations in these models may suffer from the curse of dimensionality in a natural setting. People generally rely on a tractable yet sufficient set of features to understand the complex environment. We propose a rational model of categorization based on a hierarchical mixture of probabilistic principal components that simultaneously learns category representations and an economical collection of features. The model captures dimensional biases in human categorization and supports zeroshot learning. We further exploit a generative process within a lowdimensional latent space to provide a better account of categorization with highdimensional stimuli. We validate the model with simulation and behavioral experiments.
Postmodelselection prediction for GLMs ; We give two prediction intervals PIs for generalized linear models that take model selection uncertainty into account. The first is a straightforward extension of asymptotic normality results, and the second includes an extra optimization that improves nominal coverage for smalltomoderate samples. Both PIs are wider than would be obtained without incorporating model selection uncertainty. We compare these two PIs with three other PIs. Two are based on bootstrapping procedures and the third is based on a PI from Bayes model averaging. We argue that for general usage either the asymptotic normality or optimized asymptotic normality PIs work best. In an Appendix we extend our results to generalized linear mixed models.
EzGal A Flexible Interface for Stellar Population Synthesis Models ; We present EzGal, a flexible python program designed to easily generate observable parameters magnitudes, colors, masstolight ratios for any stellar population synthesis SPS model. As has been demonstrated by various authors, the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets. EzGal is also capable of generating composite stellar population models CSPs and can interpolate between metallicities for a given model set. We have created a web interface to run EzGal and generate observables for a variety of star formation histories and model sets. We make many commonly used SPS models available from this interface: the BC03 models, an updated version of these models, the Maraston models, the BaSTI models, and finally the FSPS models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star formation history. We recover the wellknown result that the models agree best in the optical for old, solar metallicity models, with differences at the 0.1 magnitude level. The most problematic regime for SPS modeling is for young ages (less than 2 Gyr) and long wavelengths (lambda greater than 7500 Angstroms), where scatter between models can vary from 0.3 mags Sloan i to 0.7 mags Ks. We find that these differences are best understood as general uncertainties in SPS modeling. Finally we explore a more physically motivated example by generating CSPs with a star formation history matching the global star formation history of the universe. We demonstrate that the wavelength and age dependence of SPS model uncertainty translates into a redshift dependent model uncertainty, highlighting the importance of a quantitative understanding of model differences when comparing observations to models as a function of redshift.
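A hypothetical usage sketch based only on the interface described in this abstract; the module, model-file, and method names below are assumptions on my part and should be checked against the EzGal documentation.

```python
# Hypothetical sketch (call names are assumptions, not verified against the EzGal docs):
# load a stellar population synthesis model and request apparent magnitudes versus redshift.
import ezgal

model = ezgal.model("bc03_ssp_z_0.02_chab.model")   # assumed BC03 model-file name
mags = model.get_apparent_mags(zf=3.0,              # assumed call: formation redshift,
                               filters=["sloan_i", "ch1"],  # filters, and
                               zs=[0.5, 1.0, 1.5])          # observed redshifts
```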
Generalrelativistic Model of Magnetically Driven Jet ; A general scheme for the construction of a generalrelativistic model of a magnetically driven jet is suggested. The method is based on the use of the 3+1 MHD formalism. It is shown that the critical points of the flow and the explicit radial behavior of the physical variables may be derived through the jet profile function.
Multilinear generating functions for Charlier polynomials ; Charlier configurations provide a combinatorial model for Charlier polynomials. We use this model to give a combinatorial proof of a multilinear generating function for Charlier polynomials. As special cases of the multilinear generating function, we obtain the bilinear generating function for Charlier polynomials and formulas for derangements.
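For context, the single-variable generating function that these multilinear identities extend is, in the Askey-scheme normalization of the Charlier polynomials C_n(x; a),

\[
\sum_{n \ge 0} C_n(x; a)\, \frac{t^n}{n!} \;=\; e^{t} \left( 1 - \frac{t}{a} \right)^{x}.
\]

Other normalizations used in the combinatorial literature differ by signs and scaling.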