A supernova remnant coincident with the slow X-ray pulsar AX J1845–0258
B. M. Gaensler (Hubble Fellow; Center for Space Research, Massachusetts Institute of Technology, 70 Vassar Street, Cambridge, MA 02139; [email protected]),
E. V. Gotthelf (Columbia Astrophysics Laboratory, Columbia University, 550 West 120th Street, New York, NY 10027; [email protected]),
and G. Vasisht (Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109; [email protected])
Abstract
We report on Very Large Array observations in the direction of the
recently-discovered slow X-ray pulsar AX J1845–0258. In the resulting images, we
find a $5^{\prime}$ shell of radio emission; the shell is linearly polarized
with a non-thermal spectral index. We classify this source as a previously
unidentified, young ($<8000$ yr) supernova remnant (SNR), G29.6+0.1, which
we propose is physically associated with AX J1845–0258. The young age of
G29.6+0.1 is then consistent with the interpretation that anomalous X-ray
pulsars (AXPs) are isolated, highly magnetized neutron stars
(“magnetars”). Three of the six known AXPs can now be associated
with SNRs; we conclude that AXPs are young ($\lesssim$10 000 yr) objects,
and that they are produced in at least 5% of core-collapse supernovae.
ISM: individual (G29.6+0.1) –
ISM: supernova remnants –
pulsars: individual (AX J1845–0258) –
radio continuum: ISM –
stars: neutron –
X-rays: stars
(To appear in The Astrophysical Journal Letters)
1 Introduction
It is becoming increasingly apparent that isolated neutron stars come in
many flavors besides traditional radio pulsars. In recent years, the
neutron star zoo has expanded to include $\sim$10 radio-quiet neutron
stars (bj98 ), six anomalous X-ray pulsars (AXPs; mer99 )
and four soft $\gamma$-ray repeaters (SGRs; kou99 ). There is
much uncertainty and debate as to the nature of these sources; one way
towards characterizing their properties is through associations with supernova
remnants (SNRs). An association with a SNR gives an independent estimate
of a neutron star’s age and distance, while the position of the neutron
star with respect to the SNR’s center can be used to estimate the
transverse space velocity of the compact object.
A case in point is the class of AXPs. Some authors propose that the AXPs
are accreting systems (van Paradijs et al. 1995; ms95 ; Ghosh et al. 1997), while others argue that AXPs are “magnetars”, isolated
neutron stars with very strong magnetic fields, $B\gtrsim 10^{14}$ G
(td96b ; hh97 ; mel99 ). However, the association
of the AXP 1E 1841–045 with the very young ($\lesssim$2 kyr) SNR G27.4+0.0
(vg97 ) makes the case that 1E 1841–045 is a young object. Assuming
that the pulsar was born spinning quickly, it is difficult to see how
accretion could have slowed it down to its current period in such a short
time. This result thus favors the magnetar model for 1E 1841–045, and indeed
the magnetic field inferred from its period and period derivative,
and assuming standard pulsar spin-down, is
$B\approx 8\times 10^{14}$ G.
AX J1845–0258 (also called AX J1844.8–0258)
is a 6.97 sec X-ray pulsar, found serendipitously in
an ASCA observation of the (presumably unassociated)
SNR G29.7–0.3 (gv98 , hereafter GV98; tkk+98 , hereafter
T98). The long pulse period, low Galactic latitude and soft spectrum of
AX J1845–0258 led GV98 and T98 to independently propose that this source is an
AXP (a conclusion which still needs to be confirmed through measurement
of a period derivative). The high hydrogen column density
inferred from photoelectric absorption ($N_{H}\approx 10^{23}$ cm${}^{-2}$)
suggests that AX J1845–0258 is distant; T98 put it in the Scutum arm, with
a consequent distance of 8.5 kpc, while GV98 nominate 15 kpc.
Because AX J1845–0258 was discovered at the very edge of the ASCA GIS
field-of-view, its position from these observations could only be
crudely estimated, with an uncertainty of $\sim 3^{\prime}$.
A subsequent
(1999 March) 50 ks on-axis ASCA observation has since been carried out
(vgtg99 ). No pulsations are seen in
these data, but a faint point source, AX J184453.3–025642, is detected within
the error circle for AX J1845–0258.
Vasisht et al. (1999) determine an accurate position for AX J184453.3–025642,
and argue that it corresponds to AX J1845–0258 in a quiescent state.
Significant variations in the flux density of AX J1845–0258 were also reported
by T98.
The region containing AX J1845–0258 has been surveyed at 1.4 GHz as part of the
NVSS (ccg+98 ). An image from this survey shows a $\sim 5^{\prime}$ shell
near the position of the pulsar. We here report on multi-frequency
polarimetric observations of this radio shell, at substantially
higher sensitivity and spatial resolution than offered by the NVSS.
Our observations and analysis are described in §2,
and the resulting images are presented in §3. In
§4 we argue that the radio shell coincident with AX J1845–0258 is a new SNR, and consider the likelihood of an association between the
two sources.
2 Observations and Data Reduction
Radio observations of the field of AX J1845–0258 were made with the
D-configuration of the Very Large Array (VLA) on 1999 March 26. The total
observing time was 6 hr, of which 4.5 hr was spent observing in the 5 GHz
band, and the remainder in the 8 GHz band. 5 GHz observations consisted
of a 100 MHz bandwidth centered on 4.860 GHz; 8 GHz observations were
similar, but centered on 8.460 GHz. Amplitudes were calibrated by
observations of 3C 286, assuming flux densities of 7.5 and 5.2 Jy at
5 GHz and 8 GHz respectively. Antenna gains and instrumental polarization
were calibrated using regular observations of MRC 1801+010. Four Stokes
parameters (RR, LL, RL, LR) were recorded in all observations. To cover
the entire region of interest, observations were carried out in a mosaic
of 2 (3) pointings at 5 (8) GHz.
Data were edited and calibrated in the MIRIAD package. In total
intensity (Stokes $I$), mosaic images of the field were formed using
uniform weighting and maximum entropy
deconvolution. The resulting images were then corrected for both the
mean primary beam response of the VLA antennas and the mosaic pattern. The
resolution and noise in the final images are given in Table 1.
Images of the region were also formed in Stokes $Q$, $U$ and $V$. These
images were made using natural weighting to give maximum sensitivity, and
then deconvolved using a joint maximum entropy technique (Sault et al. 1999).
At each of 5 and 8 GHz, a linear polarization image $L$ was formed from
$Q$ and $U$. Each $L$ image was clipped where the polarized emission
or the total intensity was less than 5$\sigma$.
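The clipping step above amounts to forming $L=\sqrt{Q^{2}+U^{2}}$ pixel-by-pixel and blanking pixels that fail either threshold. A minimal sketch (array values and noise levels are illustrative, not from the observations):

```python
import numpy as np

def linpol_image(Q, U, I, sigma_qu, sigma_i, nsigma=5.0):
    """Form a linear-polarization image L = sqrt(Q^2 + U^2), blanking
    (setting to NaN) pixels where the polarized emission or the total
    intensity falls below nsigma times the respective image noise."""
    L = np.hypot(Q, U)
    mask = (L < nsigma * sigma_qu) | (I < nsigma * sigma_i)
    return np.where(mask, np.nan, L)

# Toy 2x2 example: only the pixel bright in both L and I survives.
Q = np.array([[3.0, 0.1], [4.0, 0.0]])
U = np.array([[4.0, 0.1], [3.0, 0.0]])
I = np.array([[20.0, 20.0], [1.0, 20.0]])
L = linpol_image(Q, U, I, sigma_qu=0.2, sigma_i=0.5)
```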
In order to determine a spectral index from these data, it is important to
ensure that the images contain the same spatial scales. We thus
spatially filtered each total intensity image (see gbm+98 ),
removing structure on scales larger than $5\arcmin$ and smoothing each
image to a resolution of $15\arcsec$. The spatial distribution of
spectral index was then determined using the method of “T–T”
(temperature-temperature) plots (tpkp62 ; gbm+98 ).
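A T–T plot fits a straight line to the pixel-by-pixel intensities of the two matched images; the slope gives the spectral index, while the fitted intercept absorbs any difference in zero level between the maps. A synthetic demonstration (the two frequencies are those observed here; the image values and injected index are fabricated for illustration):

```python
import numpy as np

nu1, nu2 = 4.860e9, 8.460e9    # the two observing frequencies (Hz)
alpha_true = -0.5              # injected spectral index for the test

rng = np.random.default_rng(1)
S1 = rng.uniform(1.0, 10.0, 500)             # per-pixel intensities at nu1
S2 = S1 * (nu2 / nu1) ** alpha_true + 0.3    # at nu2, plus a baseline offset

# Fit S2 = m * S1 + b; the intercept b absorbs the large-scale background,
# which is why T-T plots are insensitive to zero-level differences.
m, b = np.polyfit(S1, S2, 1)
alpha = np.log(m) / np.log(nu2 / nu1)        # recovers alpha_true
```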
3 Results
Total intensity images of the region are shown in Figure 1.
At both 5 and 8 GHz, a distinct shell of emission is seen, which
we designate G29.6+0.1; observed properties are given in Table 1.
The shell is clumpy, with a particularly bright
clump on its eastern edge. In the east the shell is quite thick (up
to 50% of the radius), while the north-western rim is brighter and narrower.
Two point sources can be seen within the shell interior. At 5 GHz,
the shell appears to be sitting upon a plateau of diffuse extended emission;
this emission is resolved out at 8 GHz.
Significant linear polarization at 5 GHz is seen from
much of the shell, particularly in the two brightest parts of
the shell on the eastern and western edges. Where
detected, the fractional polarization is 2–20%. At 8 GHz,
linear polarization is seen only from these two regions, with fractional
polarization 5–40%. No emission was detected in
Stokes $V$, except for instrumental effects associated with the offset
of the VLA primary beam between left- and right-circular polarization.
Meaningful T–T plots were obtained for three regions of the SNR,
as marked in Figure 1; the spectral index, $\alpha$ ($S_{\nu}\propto\nu^{\alpha}$), for each region is marked. There appear
to be distinct variations in spectral index around the shell,
but all three determinations fall in the range $-0.7\lesssim\alpha\lesssim-0.4$.
Two point sources are visible within the field. The more northerly of
the two is at $18^{\rm h}44^{\rm m}55\fs 11$,
$-02\arcdeg 55\arcmin 36\farcs 9$ (J2000), with $S_{\rm 5\,GHz}=0.8\pm 0.1$ mJy and $\alpha=+0.5\pm 0.3$, while the other is at
$18^{\rm h}44^{\rm m}50\fs 59$, $-02\arcdeg 57\arcmin 58\farcs 5$ (J2000)
with $S_{\rm 5\,GHz}=2.0\pm 0.3$ mJy and $\alpha=-0.4\pm 0.1$.
Positional uncertainties for both sources are $\approx 0\farcs 3$ in
each coordinate.
No emission is detected from either source in Stokes $Q$, $U$ or $V$.
4 Discussion
The source G29.6+0.1 is significantly linearly polarized and has a non-thermal
spectrum. Furthermore, the source has a distinct shell
morphology, and shows no significant
counterpart in 60 $\mu$m IRAS data.
These are all the characteristic properties of supernova remnants
(e.g. wg96 ),
and we thus classify G29.6+0.1 as a previously unidentified SNR.
4.1 Physical Properties of G29.6+0.1
Distances to SNRs are notoriously difficult to determine. The
purported $\Sigma-D$ relation has extremely large uncertainties, and
this source is most likely too faint to show H I absorption. So while
we cannot determine a distance to G29.6+0.1 directly, we can attempt to estimate its
distance by associating it with other objects in the region. Indeed
hydrogen recombination lines from extended thermal material have been
detected from the direction of G29.6+0.1 (Lockman et al. 1996), at systemic
velocities of $+42$ and $+99$ km s${}^{-1}$.
Adopting a standard model for Galactic rotation (Fich et al. 1989),
these velocities correspond to possible distances of 3, 6, 9 or 12 kpc,
a result which is not particularly constraining.
Nevertheless,
G29.6+0.1 is of sufficiently small angular size that we can put an upper limit
on its age simply by assuming that it is located within the Galaxy. At
a maximum distance of 20 kpc, the SNR is $27.5\pm 1.5$ pc across. For a
uniform ambient medium of density $n_{0}$ cm${}^{-3}$, the SNR has then
swept up $(260\pm 40)n_{0}$ $M_{\sun}$ from the ISM which, for typical ejected
masses and ambient densities, corresponds to a SNR which has almost
completed the transition from free expansion to the adiabatic
(Sedov-Taylor) phase (see e.g. dj96 ). Thus expansion
in the adiabatic phase acts as an upper limit, and
we can derive a maximum age for
G29.6+0.1 of $(13\pm 4)\left(n_{0}/E_{51}\right)^{1/2}$ kyr,
where $E_{51}$ is the kinetic energy of the explosion in units of
$10^{51}$ erg. For a typical value $n_{0}/E_{51}=0.2$ (Frail et al. 1994), we find
that the age of the SNR must be less than 8 kyr. For distances nearer than
20 kpc, the SNR is even younger. For example, at a distance of
10 kpc, the SNR has swept up sufficiently little material from the ISM
that it is still freely expanding, and an expansion velocity of
5000 km s${}^{-1}$ then corresponds to an age of 1.4 kyr.
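The swept-up mass and age limit above can be reproduced numerically. The sketch below assumes a pure-hydrogen ambient medium and the standard Sedov-Taylor relation $R=1.17\,(E t^{2}/\rho)^{1/5}$; different choices for the numerical constants shift the answer by tens of percent, consistent with the quoted error bars.

```python
import math

PC = 3.086e18       # cm per parsec
M_SUN = 1.989e33    # g
M_H = 1.673e-24     # g, hydrogen mass
KYR = 3.156e10      # seconds per kyr

def swept_up_mass(diameter_pc, n0=1.0):
    """Mass (M_sun) swept up from a uniform medium of density n0 cm^-3."""
    R = 0.5 * diameter_pc * PC
    return (4.0 / 3.0) * math.pi * R**3 * n0 * M_H / M_SUN

def sedov_age_kyr(diameter_pc, n0=1.0, E51=1.0):
    """Sedov-Taylor age (kyr): t = (R / 1.17)^(5/2) * sqrt(rho / E)."""
    R = 0.5 * diameter_pc * PC
    rho = n0 * M_H
    return (R / 1.17) ** 2.5 * math.sqrt(rho / (E51 * 1e51)) / KYR

mass = swept_up_mass(27.5)                  # ~260 n0 M_sun at 20 kpc
age = sedov_age_kyr(27.5, n0=0.2, E51=1.0)  # upper limit for n0/E51 = 0.2
```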
4.2 An association with AX J1845–0258?
G29.6+0.1 is a young SNR in the vicinity of a slow X-ray pulsar. If the
two can be shown to be associated, and if we assume that AX J1845–0258 was
born spinning rapidly, then the youth of the system argues that AX J1845–0258 has slowed down to its current period via electromagnetic braking rather
than accretion torque, and that it is thus best interpreted as
a magnetar (cf. vg97 ). Indeed if one assumes that the source has
slowed down through the former process, its inferred dipole magnetic field
is $\sim 9t_{3}^{-1/2}\times 10^{14}$ G, for an age
$t_{3}$ kyr. For ages in the range 1.4–8 kyr (§4.1
above), this results in a field in the range $(3-8)\times 10^{14}$ G,
typical of other sources claimed to be magnetars.
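The field estimate above follows from the standard dipole spin-down relation $B=3.2\times 10^{19}\sqrt{P\dot{P}}$ G, with $\dot{P}$ fixed by the characteristic age $t=P/2\dot{P}$; a numerical sketch (the formula and constant are the standard braking-index-3 results, the ages are from §4.1):

```python
import math

P = 6.97        # spin period of AX J1845-0258 (s)
KYR = 3.156e10  # seconds per kyr

def dipole_field_gauss(P_s, age_kyr):
    """Surface dipole field assuming standard magnetic-dipole spin-down,
    with Pdot fixed by the characteristic age t = P / (2 Pdot)."""
    Pdot = P_s / (2.0 * age_kyr * KYR)
    return 3.2e19 * math.sqrt(P_s * Pdot)

B_young = dipole_field_gauss(P, 1.4)  # ~8e14 G at the youngest allowed age
B_old = dipole_field_gauss(P, 8.0)    # ~3e14 G at the 8 kyr upper limit
```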
But are the two sources associated? Associations between neutron
stars and SNRs are judged on various criteria, including agreements in
distance and in age, positional coincidence, and evidence for interaction.
Age and distance are the most fundamental of these, but unfortunately
existing data on AX J1845–0258 provide no constraints on an age, and suggest
only a very approximate distance of $\sim$10 kpc (GV98; T98).
The source AX J184453.3–025642 (vgtg99 ) is located well within the confines
of G29.6+0.1, less than $40\arcsec$ from the center of the remnant (see
Figure 1). Vasisht et al. (1999) argue that AX J1845–0258 and
AX J184453.3–025642 are the same source; if we assume that this source is associated
with the SNR and was born at the remnant’s center, then we can infer an
upper limit on its transverse velocity of $1900d_{10}/t_{3}$ km s${}^{-1}$, where
the distance to the system is $10d_{10}$ kpc. In §4.1 we
estimated $d_{10}/t_{3}\sim 0.3-0.7$, and so the inferred velocity
falls comfortably within the range seen for the radio pulsar population
(e.g. ll94 ; cc98 ). Alternatively, if we assume a
transverse velocity of $400v_{400}$ km s${}^{-1}$, we can infer an age
for the system of $<5d_{10}/v_{400}$ kyr, consistent with the
determinations above. There is no obvious radio counterpart to the
X-ray pulsar — both radio point sources in the region are outside all
of the X-ray error circles. At the position of
AX J184453.3–025642, we set a 5$\sigma$ upper limit of
1 mJy on the 5 GHz flux density of any point source.
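The transverse-velocity limit quoted above is just the $40\arcsec$ offset, converted to a physical length at the assumed distance, divided by the age; as a sketch:

```python
import math

PC = 3.086e18    # cm per parsec
KPC = 1e3 * PC
KYR = 3.156e10   # seconds per kyr

def vmax_kms(offset_arcsec, d_kpc, age_kyr):
    """Upper limit on transverse velocity for a pulsar born at the
    remnant's center: angular offset -> cm, divided by the age."""
    offset_cm = math.radians(offset_arcsec / 3600.0) * d_kpc * KPC
    return offset_cm / (age_kyr * KYR) / 1e5   # cm/s -> km/s

v = vmax_kms(40.0, 10.0, 1.0)   # ~1900 km/s for d = 10 kpc, t = 1 kyr
```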
We also need to consider the possibility that the positional alignment
of AX J184453.3–025642 and G29.6+0.1 is simply by chance. The region
is a complex part of the Galactic Plane — there are 15
catalogued SNRs within 5° — and it seems reasonable in such
a region that unassociated SNRs and neutron stars could lie along the
same line of sight (gj95c ). Many young radio pulsars have no
associated SNR (Braun et al. 1989), so there is no reason to demand that
even a young neutron star be associated with a SNR.
The first quadrant of the Galaxy is not well-surveyed for SNRs, so
we estimate the likelihood of a chance association by considering the
fourth quadrant, which has been thoroughly surveyed for SNRs by Whiteoak
& Green (1996). In a representative region of the sky
defined by $320^{\circ}\leq l\leq 355^{\circ}$ and $|b|\leq 1.5^{\circ}$,
we find 44 SNRs in their catalogue. Thus for the $\sim$10 radio-quiet
neutron stars, AXPs and SGRs at comparable longitudes and latitudes,
there is a probability $1.6\times 10^{-3}$ that at least one will lie
within $40\arcsec$ of the center of a known SNR by chance. Of course in
the present case we have carried out a targeted search towards a given
position, and so the probability of spatial coincidence is somewhat
higher than for a survey; nevertheless, we regard it unlikely
that AX J184453.3–025642 should lie so close to the center of an unrelated SNR, and
hence propose that the pulsar and the SNR are genuinely associated.
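The chance-coincidence probability can be reproduced as follows: the $40\arcsec$ circles around the 44 SNR centers cover a tiny fraction of the $35\arcdeg\times 3\arcdeg$ comparison region, and the probability that at least one of $\sim$10 neutron stars lands in one by chance follows from the binomial complement. This is a sketch of the arithmetic, not the authors' exact calculation:

```python
import math

region_deg2 = 35.0 * 3.0   # 320 <= l <= 355, |b| <= 1.5
n_snr = 44                 # SNRs in the Whiteoak & Green catalogue
n_ns = 10                  # neutron stars at comparable l, b
r_deg = 40.0 / 3600.0      # 40 arcsec in degrees

# Fraction of the region covered by 40" circles around SNR centers:
p_single = n_snr * math.pi * r_deg**2 / region_deg2

# Probability that at least one of n_ns stars falls inside some circle:
p_any = 1.0 - (1.0 - p_single) ** n_ns   # ~1.6e-3
```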
There is good evidence that magnetars power radio synchrotron
nebulae through the injection of relativistic particles into their
environment (kfk+94 ; Frail et al. 1999). The two such sources known
are filled-center nebulae with spectral indices $\alpha\sim-0.7$,
and in one case the neutron star is substantially offset from the core
of its associated nebula (hkc+99 ). In Figure 1,
the clump of emission with peak at $18^{\rm h}44^{\rm m}56^{\rm s}$, $-02\arcdeg 57\arcmin$ (J2000) has such properties, and one
can speculate that it corresponds to such a source. Alternatively,
compact steep-spectrum features are seen in other shell SNRs, and may be
indicative of deceleration of the shock in regions where it is expanding
into a dense ambient medium (dbwg91 ; gbm+98 ).
5 Conclusions
Radio observations of the field of the slow X-ray pulsar AX J1845–0258 reveal
a linearly polarized non-thermal shell, G29.6+0.1, which we classify as
a previously undiscovered supernova remnant. We infer that G29.6+0.1 is
young, with an upper limit on its age of 8000 yr. The proposed quiescent
counterpart of AX J1845–0258, AX J184453.3–025642, is almost at the center of G29.6+0.1, from which
we argue that the pulsar and SNR were created in the same supernova
explosion. The young age of the system provides further evidence that
anomalous X-ray pulsars are isolated magnetars rather than accreting
systems, although we caution that the apparent flux variability
of AX J1845–0258 raises questions over
both its classification as an AXP and its
positional coincidence with G29.6+0.1.
Future X-ray measurements should be able to clarify the situation.
There are now six known AXPs, three of which have been
associated with SNRs.
In every case the pulsar is at or near the
geometric center of its SNR. This result is certainly consistent with
AXPs being young, isolated neutron stars, as argued by the magnetar
hypothesis. If one considers the radio pulsar population, the fraction
of pulsars younger than a given age which can be convincingly
associated with SNRs drops as the age threshold increases.
The age below
which 50% of pulsars have good SNR associations
is $\sim$20 kyr, and for
several of these the pulsar is significantly offset from the center of
its SNR (e.g. fk91 ; fggd96 ). Thus if the SNRs associated
with both AXPs and radio pulsars come from similar explosions and
evolve into similar environments, this seems good evidence that AXPs
are considerably younger than 20 kyr.
Indeed all of the three SNRs associated with AXPs have
ages $<$10 kyr (gv97 ;
pof+98 ; §4.1 of this paper). While the sample of
AXPs is no doubt incomplete, this implies a Galactic
birth-rate for AXPs of $>$0.6 kyr${}^{-1}$. This corresponds to
$(5\pm 2)$% of core-collapse supernovae (ctt+97 ), or
3%–20% of the radio pulsar population (lml+98 ; bj98 ).
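The birth-rate arithmetic is simple: six AXPs, each visible for $\lesssim$10 kyr, imply at least 0.6 born per kyr. Comparing with a Galactic core-collapse supernova rate of roughly one per century (an illustrative value of the order implied by the cited supernova-rate work, not a figure from this paper) gives the few-percent fraction:

```python
n_axp = 6
lifetime_kyr = 10.0                        # AXP ages appear to be < 10 kyr
birthrate_per_kyr = n_axp / lifetime_kyr   # > 0.6 per kyr (sample incomplete)

# Assumed Galactic core-collapse SN rate, ~1.2 per century (illustrative):
cc_rate_per_kyr = 12.0
fraction = birthrate_per_kyr / cc_rate_per_kyr   # ~5% of core-collapse SNe
```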
There is mounting evidence that soft $\gamma$-ray repeaters (SGRs) are
also magnetars (ksh+99 ). However of the four known SGRs, two
(0526–66 and 1627–41) are on the edge of young SNRs (cdt+82 ;
Smith et al. 1999), a third (1900+14) is on the edge of an old SNR
(vkfg94 ), and the fourth (1806–20) has no associated SNR blast
wave (kfk+94 ). This suggests that SGRs represent an older, or
higher velocity, manifestation of magnetars than do AXPs.
B.M.G. thanks Bob Sault for advice on calibration.
The National Radio Astronomy Observatory is a facility of the National
Science Foundation operated under cooperative agreement by Associated
Universities, Inc.
B.M.G. acknowledges the support of NASA through Hubble Fellowship grant
HF-01107.01-98A awarded by the Space Telescope Science Institute, which
is operated by the Association of Universities for Research in
Astronomy, Inc., for NASA under contract NAS 5–26555.
E.V.G. & G.V.’s research is supported by NASA LTSA grant NAG5–22250.
References
(Braun et al. 1989)
Braun, R., Goss, W. M., & Lyne, A. G. 1989, ApJ, 340, 355.
(Brazier & Johnston 1999)
Brazier, K. T. S. & Johnston, S. 1999, MNRAS, 305, 671.
(Cappellaro et al. 1997)
Cappellaro, E., Turatto, M., Tsvetkov, D. Y., Bartunov, O. S., Pollas, C.,
Evans, R., & Hamuy, M. 1997, A&A, 322, 431.
(Cline et al. 1982)
Cline, T. L. et al. 1982, ApJ, 255, L45.
(Condon et al. 1998)
Condon, J. J., Cotton, W. D., Greisen, E. W., Yin, Q. F., Perley, R. A.,
Taylor, G. B., & Broderick, J. J. 1998, AJ, 115, 1693.
(Cordes & Chernoff 1998)
Cordes, J. M. & Chernoff, D. F. 1998, ApJ, 505, 315.
(Dohm-Palmer & Jones 1996)
Dohm-Palmer, R. C. & Jones, T. W. 1996, ApJ, 471, 279.
(Dubner et al. 1991)
Dubner, G. M., Braun, R., Winkler, P. F., & Goss, W. M. 1991, AJ, 101, 1466.
(Fich et al. 1989)
Fich, M., Blitz, L., & Stark, A. A. 1989, ApJ, 342, 272.
(Frail et al. 1996)
Frail, D. A., Giacani, E. B., Goss, W. M., & Dubner, G. 1996, ApJ,
464, L165.
(Frail et al. 1994)
Frail, D. A., Goss, W. M., & Whiteoak, J. B. Z. 1994, ApJ, 437, 781.
(Frail & Kulkarni 1991)
Frail, D. A. & Kulkarni, S. R. 1991, Nature, 352, 785.
(Frail et al. 1999)
Frail, D. A., Kulkarni, S. R., & Bloom, J. S. 1999, Nature, 398, 127.
(Gaensler et al. 1999)
Gaensler, B. M., Brazier, K. T. S., Manchester, R. N., Johnston, S., & Green,
A. J. 1999, MNRAS, 305, 724.
(Gaensler & Johnston 1995)
Gaensler, B. M. & Johnston, S. 1995, MNRAS, 277, 1243.
(Ghosh et al. 1997)
Ghosh, P., Angelini, L., & White, N. E. 1997, ApJ, 478, 713.
(Gotthelf & Vasisht 1997)
Gotthelf, E. V. & Vasisht, G. 1997, ApJ, 486, L133.
(Gotthelf & Vasisht 1998)
Gotthelf, E. V. & Vasisht, G. 1998, New Astron., 3, 293 (GV98).
(Heyl & Hernquist 1997)
Heyl, J. S. & Hernquist, L. 1997, ApJ, 489, L67.
(Hurley et al. 1999)
Hurley, K., Kouvelioutou, C., Cline, T., Mazets, E., Golenetskii, S.,
Frederiks, D. D., & van Paradijs, J. 1999, ApJ, 523, L37.
(Kouveliotou 1999)
Kouveliotou, C. 1999, BAAS, 193, 56.02.
(Kouveliotou et al. 1999)
Kouveliotou, C. et al. 1999, ApJ, 510, L115.
(Kulkarni et al. 1994)
Kulkarni, S. R., Frail, D. A., Kassim, N. E., Murakami, T., & Vasisht, G.
1994, Nature, 368, 129.
(Lockman et al. 1996)
Lockman, F. J., Pisano, D. J., & Howard, G. J. 1996, ApJ, 472, 173.
(Lyne & Lorimer 1994)
Lyne, A. G. & Lorimer, D. R. 1994, Nature, 369, 127.
(Lyne et al. 1998)
Lyne, A. G. et al. 1998, MNRAS, 295, 743.
(Melatos 1999)
Melatos, A. 1999, ApJ, 519, L77.
(Mereghetti 1999)
Mereghetti, S. 1999, Mem. S. A. It., 69, 819.
(Mereghetti & Stella 1995)
Mereghetti, S. & Stella, L. 1995, ApJ, 442, L17.
(Parmar et al. 1998)
Parmar, A. N., Oosterbroek, T., Favata, F., Pightling, S., Coe,
M. J., Mereghetti, S., & Israel, G. L. 1998, A&A, 330, 175.
(Sault et al. 1999)
Sault, R. J., Bock, D. C.-J., & Duncan, A. R. 1999, A&AS, in press.
(Smith et al. 1999)
Smith, D. A., Bradt, H. V., & Levine, A. M. 1999, ApJ, 519, L147.
(Thompson & Duncan 1996)
Thompson, C. & Duncan, R. C. 1996, ApJ, 473, 322.
(Torii et al. 1998)
Torii, K., Kinugasa, K., Katayama, K., Tsunemi, H., & Yamauchi, S. 1998,
ApJ, 503, 843 (T98).
(Turtle et al. 1962)
Turtle, A. J., Pugh, J. F., Kenderdine, S., & Pauliny-Toth, I. I. K. 1962,
MNRAS, 124, 297.
(van Paradijs et al. 1995)
van Paradijs, J., Taam, R. E., & van den Heuvel, E. P. J. 1995, A&A, 299, L41.
(Vasisht & Gotthelf 1997)
Vasisht, G. & Gotthelf, E. V. 1997, ApJ, 486, L129.
(Vasisht et al. 1999)
Vasisht, G., Gotthelf, E. V., Torii, K., & Gaensler, B. M. 1999, ApJ, submitted.
(Vasisht et al. 1994)
Vasisht, G., Kulkarni, S. R., Frail, D. A., & Greiner, J. 1994, ApJ, 431, L35.
(Whiteoak & Green 1996)
Whiteoak, J. B. Z. & Green, A. J. 1996, A&AS, 118, 329.
From yeV to TeV: Search for the Neutron Electric Dipole Moment
D. H. Beck (University of Illinois at Urbana-Champaign),
D. Budker (University of California Berkeley and Lawrence Berkeley National Laboratory),
and B. K. Park (University of California Berkeley),
for the nEDM Collaboration
Abstract
The existence of electric dipole moments (EDM) for fundamental particles signals time-reversal symmetry (T) violation accompanied by violation of parity (P); only upper limits have been established to date. Time-reversal violation in turn implies CP violation under the assumption that CPT is a good symmetry. The neutron is an attractive system for an
EDM search, both because it is neutral and because a neutron EDM would be easier to interpret than the comparable quantity for a nucleus or even an atom. We briefly introduce the key experimental requirements for such a search and describe some aspects of the neutron EDM experiment planned for the Spallation Neutron Source at the U.S. Oak Ridge National Laboratory.
Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL, USA
Department of Physics, University of California Berkeley, Berkeley, CA 94720-7300, USA
Nuclear Science Division, E. O. Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
nEDM Collaboration: http://www.phy.ornl.gov/nedm/
PACS 14.20.Dh – Neutron properties
1 Introduction
Searches for electric dipole moments (EDM) of fundamental particles have a history dating back to the 1950s. In this era, the violation of parity was first postulated and then observed in the weak-interaction experiments of Wu et al. [1], marking the beginning of the demise of discrete symmetries in general (so far, with the exception of CPT invariance). The first neutron EDM search was carried out early in the decade by Smith, Purcell and Ramsey as a search for parity violation in a scattering experiment at Oak Ridge. Although the measurements were completed by 1951, the paper was not published until 1957 [2], after the publication of the Lee and Yang paper [3] on the weak interaction, because the upper limit determined, $\left|d_{n}\right|\leq 3.9\times 10^{-20}$ e$\cdot$cm (all limits quoted here are 90% C.L.), was not thought to be interesting at the time.
In fact, upper limits on EDM have continued to limit theory in important ways over the intervening 60 years. The CP violation in the standard model generates very small EDM, typically coming in only at the level of a third order loop [4]. Therefore, in the search for new physics, EDM are attractive because the standard model ‘background’ is several orders of magnitude below currently observed limits. There are two prominent threads that argue for continued searches. First, in 1967 Sakharov argued [5] that any process that could generate the matter-antimatter asymmetry observed in the universe must involve CP violation; the aforementioned CP violation in the standard model is much too small to account for the observed asymmetry [6]. Second, in the currently popular models of physics beyond the standard model, e.g., SUSY, there is much more opportunity for the necessary complex couplings (analogous to those in the CKM matrix) in the SUSY breaking mechanisms of our low-energy world. The current set of EDM upper limits has significantly restricted the space of possible SUSY models [7]. With these experiments, we are therefore exploring aspects of physics beyond the standard model that pertain to the TeV-scale energies, and in ways that complement the direct searches at the LHC.
2 EDM Experiments
Most EDM experiments look for precession of a fundamental particle’s angular momentum in an external electric field. The interaction energy of the electric dipole in the electric field combines with that of the T-allowed magnetic moment in an external magnetic field in a way that either increases or decreases the normal precession frequency. These effects are very small; the interaction energy at the sensitivity limit of our proposed Oak Ridge experiment is about $10^{-23}$ eV or 10 yocto-electron volts (yeV), about ten orders of magnitude smaller than the corresponding magnetic energy. We next survey a few of the key experiments and techniques in the field.
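To make the scales concrete: an EDM near the $10^{-28}$ e$\cdot$cm level in a $\sim$50 kV/cm field gives an interaction energy of order $10^{-23}$ eV, roughly ten orders of magnitude below the neutron's magnetic energy in a few-$\mu$T holding field. The EDM, field values, and holding field below are illustrative choices, not experiment parameters:

```python
import math

mu_n_eV_per_T = 6.03e-8   # neutron magnetic moment magnitude, eV/T

d_edm = 1e-28             # illustrative EDM, e*cm
E_field = 5e4             # illustrative electric field, V/cm
B_field = 3e-6            # illustrative holding field, T (~30 mG)

U_edm = d_edm * E_field            # eV: d (e*cm) times E (V/cm)
U_mag = mu_n_eV_per_T * B_field    # eV: magnetic interaction energy

orders = math.log10(U_mag / U_edm) # ~10 orders of magnitude apart
```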
A landmark EDM experiment was carried out at Berkeley by Eugene Commins’ group. Taking advantage of the relativistic enhancement of atomic EDM in heavy atoms [8], Commins used an atomic beam of polarized (paramagnetic) thallium to measure an atomic EDM that is mostly sensitive to the EDM of the electron. The experiment is essentially a variant of the Ramsey separated oscillatory field method [9], wherein the magnetic moments of optically polarized atoms are flipped by a first $\pi/2$ pulse, precess freely in a region where the magnetic and electric fields are either parallel or anti-parallel, and are then analyzed by applying a second, synchronous $\pi/2$ pulse. The resulting polarization is then probed by the same laser used to produce the initial polarization. The Commins group measured an upper limit $\left|d_{e}\right|\leq 1.5\times 10^{-27}$ e$\cdot$cm [10].
Essentially the same limit, $\left|d_{e}\right|\leq 1.05\times 10^{-27}$ e$\cdot$cm [11], has been obtained recently by the Hinds group at Imperial College in an otherwise similar experiment that takes advantage of the large internal electric field in the polar YbF molecule.
The record for the smallest EDM upper limit is held by the Hg experiment in Seattle. In this diamagnetic atom, the atomic EDM is most sensitive to the intrinsic nuclear EDM, and there is enhancement from the finite size of the nucleus. The experiment uses a pair of cells with the same nominal magnetic field, but opposite electric fields, and a second pair of cells with no electric field acting as a co-magnetometer. The atoms are first optically pumped with a circularly polarized laser beam propagating perpendicular to the magnetic field and modulated at the Larmor frequency. After the atoms are polarized, the light polarization is switched to linear, and a linear polarizer analyzes optical rotation in the transmitted light induced by the precessing atoms. The limit on the ${}^{199}$Hg EDM is $\left|d_{Hg}\right|\leq 2.6\times 10^{-29}$ e$\cdot$cm [12].
The current limit on the neutron EDM is held by the ILL experiment. It utilizes ultra-cold neutrons (UCN) with energies $\lesssim 110$ neV, which can be trapped by the Fermi potentials of certain materials like SiO${}_{2}$ and diamond-like carbon (the UCN-storage possibility was first envisioned by Zel’dovich in 1959 [13]; see also Ref. [14]). The UCN are produced by slowing cold neutrons via scattering from a receding turbine blade. After the neutrons are polarized by transmission through a magnetized foil, they precess in a cell with parallel or anti-parallel $\vec{E}$ and $\vec{B}$ fields and are analyzed in a Ramsey separated oscillatory field experiment as described above. The limit set in this experiment is $\left|d_{n}\right|\leq 2.9\times 10^{-26}$ e$\cdot$cm [15].
New n-EDM experiments are currently being developed for neutron facilities at ILL, PSI, Munich, TRIUMF, and the Oak Ridge Spallation Neutron Source. All primarily seek to increase the UCN density significantly above the $1$ cm${}^{-3}$ in the first ILL experiment. Additional co-magnetometry is also considered to be an important improvement to help circumvent the geometric-phase systematic uncertainty [16, 17] that ultimately limited the ILL experiment described above.
3 Neutron EDM experiment at the SNS
3.1 General description
The sensitivity limit of all measurements of electric dipole moments is, up to a numerical factor,
$$\delta d\sim\frac{\hbar}{E}\,\frac{1}{\sqrt{N\,\tau\,T}}\,,$$
where $E$ is the electric field, $N$ is the number of particles, $\tau$ is the coherence time and $T$ the overall measurement time. The obvious factors to attack are therefore $E$, $N$ and $\tau$.
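Plugging illustrative numbers into the shot-noise scaling (an electric field of 50 kV/cm, $5\times 10^{5}$ trapped neutrons per fill, a 500 s coherence time, and a year of integration; none of these are official experiment parameters) shows how sensitivity well below $10^{-27}$ e$\cdot$cm can arise:

```python
import math

hbar_eV_s = 6.582e-16   # eV*s

def edm_sensitivity(E_V_per_cm, N, tau_s, T_s):
    """Statistical EDM sensitivity, delta_d ~ hbar / (E sqrt(N tau T)),
    up to the numerical factor dropped in the text; returns e*cm."""
    return hbar_eV_s / (E_V_per_cm * math.sqrt(N * tau_s * T_s))

# Illustrative parameters: 50 kV/cm, 5e5 neutrons, 500 s, ~1 yr running.
delta_d = edm_sensitivity(5e4, 5e5, 500.0, 3.15e7)
```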
In order to increase $N$, this experiment (see Fig. 1) uses the downscattering of neutrons via resonant creation of phonon (Landau-Feynman) excitations in superfluid ${}^{4}$He at $T=0.3-0.5$ K. Cold neutrons with wavelengths of 8.9 Å, produced either by a graphite monochromator or a chopper, enter the experiment from a standard polarizing guide. This wavelength corresponds to the crossing of the dispersion curves of the neutrons and the superfluid elementary excitations, enabling the neutrons to efficiently give up their energy and momentum to the phonons/rotons [18], which are eventually absorbed by the cold walls of the container. The downscattered neutrons are at a temperature far below that of the bath (the upscattering rate is small) and have an energy below that of the Fermi potential associated with the polystyrene-coated acrylic container for the superfluid. In this manner, neutron densities of order 100 cm${}^{-3}$ can be produced.
However, even this large number of polarized neutrons, trapped in two three-liter cells of superfluid helium (with opposing electric fields), is neither sufficient for direct detection using SQUID or atomic magnetometers, nor is it amenable to the Ramsey technique. Thus, a second major point of departure for this experiment is the introduction of polarized ${}^{3}$He (with a magnetic moment about 10% larger than that of the neutron) into the superfluid to act as the ‘detector’ [19]. In this case, we produce polarized ${}^{3}$He atoms using an atomic beam source (ABS). The neutron-capture reaction on ${}^{3}$He
$$\mathrm{n}+{}^{3}\mathrm{He}\rightarrow\mathrm{p}+\mathrm{t}+764\ \mathrm{keV}$$
is highly spin dependent, with a cross section of megabarns when the spins are anti-parallel and effectively zero when they are parallel. Therefore, by applying a single $\pi/2$ pulse, tipping both the neutron and ${}^{3}$He spins, the difference in their precession rates can be measured by observing the scintillation light produced in the liquid helium by the capture products. Because of the large cross section, the optimal density of ${}^{3}$He is low, about $10^{12}$ cm${}^{-3}$, corresponding to a ${}^{3}$He/${}^{4}$He fraction of about $5\times 10^{-11}$, or about four orders of magnitude below the natural concentration. Even at this low density, however, we can use the overall precession rate of the polarized ${}^{3}$He, measured by SQUID, to determine the magnetic field. Aside from a small gravitational offset (because of the lower neutron temperature and larger ${}^{3}$He mass), the neutrons and ${}^{3}$He atoms sample exactly the same space inside the superfluid-filled cells, so an effective co-magnetometer is built in. To maximize the effectiveness of the co-magnetometer, a spin-dressing technique has been developed and experimentally validated [20, 21], in which the gyromagnetic ratios of the neutrons and ${}^{3}$He are rendered effectively the same by applying off-resonant radio-frequency fields.
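The measurement principle, scintillation light modulated at the difference of the two precession frequencies, can be illustrated with a short simulation. The numbers below are only indicative (roughly what a $\sim 1$ µT holding field would give) and are not parameters of the actual apparatus; the point is simply that the beat frequency is recoverable from the capture-rate signal:

```python
import numpy as np

# Indicative precession frequencies for a ~1 uT holding field
# (gamma_n/2pi ~ 29.16 Hz/uT, gamma_3He/2pi ~ 32.43 Hz/uT).
f_n, f_he3 = 29.16, 32.43
P = 0.95                        # combined n-3He polarization product
t = np.linspace(0, 20, 20000)   # 20 s of data at ~1 kHz
# The capture (scintillation) rate is modulated at the *difference*
# frequency, since capture is suppressed when the spins are parallel.
rate = 1.0 - P * np.cos(2 * np.pi * (f_he3 - f_n) * t)

# Recover the beat frequency from the spectrum of the rate signal.
spec = np.abs(np.fft.rfft(rate - rate.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
f_beat = freqs[spec.argmax()]
print(f_beat)  # close to f_he3 - f_n = 3.27 Hz
```

An EDM would shift the neutron's precession frequency linearly in the applied electric field, so the signal of interest is an $E$-correlated change in this beat frequency.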
Maximum sensitivity results when measuring for approximately the neutron lifetime, with the capture lifetime tuned (by adjusting the ${}^{3}$He density) to about the same value. Because the ${}^{3}$He will eventually be depolarized by wall collisions and field gradients, we must have a technique for removing it from the system and supplying a new charge of highly polarized ${}^{3}$He. For this purpose, we will again use the phonons in the superfluid, this time produced by a heater. These phonons scatter the ${}^{3}$He toward the cold end of a region across which the heater has produced a temperature gradient (see Refs. [22, 23] and references therein). After the volume containing the concentrated, somewhat depolarized ${}^{3}$He is dumped and refilled with pure superfluid ${}^{4}$He, a new charge of highly polarized ${}^{3}$He atoms is injected into the experiment from the ABS.
With a design value for the electric field of 50 kV/cm and a 300-day live-time, this experiment is expected to reach a level of about $8\times 10^{-28}$ e$\cdot$cm. The largest systematic uncertainty is expected to be the geometric-phase effect ($\sim 2\times 10^{-28}$ e$\cdot$cm, limited by the uniformity of the $B_{0}$ holding field), with contributions from the effective magnetic field seen by neutrons scattering from polarized ${}^{3}$He, as well as from leakage currents, at the $\sim 1\times 10^{-28}$ e$\cdot$cm level.
3.2 Field monitoring
In order to control systematic uncertainties due to the motional magnetic field, it is essential to maintain a stable, homogeneous electric field over the measurement cycle, and to reverse the electric field accurately in order to suppress systematic effects quadratic in the field. Accurate monitoring of the electric field is necessary to ensure this has been achieved.
The Kerr effect in the superfluid helium already present in the experiment provides a useful non-contact method of monitoring the electric field, especially given the harsh environment of the SNS nEDM apparatus. The applied electric field makes the helium medium birefringent, which is then detected by laser polarimetry. The Kerr constant of superfluid helium has been measured to demonstrate the feasibility of the technique [24], and, to minimize the effect of spurious birefringence from the optical windows, we have developed a double-pass cancellation scheme and demonstrated it in a proof-of-concept setup [25]. We project a sensitivity of $\delta E/E\approx 1\%$.
Non-linear magneto-optical rotation (NMOR) magnetometers can be used to monitor the magnetic field and ensure that the geometric phase effect from the magnetic field inhomogeneity is below the level specified above [26].
4 Conclusion
EDMs of elementary particles continue to provide important constraints on physics beyond the standard model. Because they are sensitive to CP-violating couplings, they are complementary to the direct searches at the LHC. Using an array of techniques, some new, some old, a considerable number of experiments are either underway or being developed to improve the limits on the EDM of the electron, of nuclei, and of the neutron. The neutron EDM experiment being developed for the Oak Ridge Spallation Neutron Source, using UCN production in place in superfluid ${}^{4}$He, has the goal of reaching into the $10^{-28}$ e$\cdot$cm regime with a roughly one-year measurement time.
References
[1] C. S. Wu et al., Phys. Rev. 105 (1957) 1413.
[2] J. H. Smith, E. M. Purcell and N. F. Ramsey, Phys. Rev. 108 (1957) 120.
[3] T. D. Lee and C. N. Yang, Phys. Rev. 104 (1956) 254.
[4] I. B. Khriplovich and A. R. Zhitnitsky, Sov. J. Nucl. Phys. 34 (1981) 95.
[5] A. D. Sakharov, Pis'ma Zh. Eksp. Teor. Fiz. 5 (1967) 32.
[6] S. Dar, hep-ph/0008248.
[7] V. Cirigliano, S. Profumo and M. J. Ramsey-Musolf, JHEP 0607 (2006) 002.
[8] P. G. H. Sandars, Phys. Lett. 22 (1966) 290.
[9] N. F. Ramsey, Phys. Rev. 78 (1950) 695.
[10] B. C. Regan et al., Phys. Rev. Lett. 88 (2002) 071805.
[11] J. J. Hudson et al., Nature 473 (2011) 493.
[12] W. C. Griffith et al., Phys. Rev. Lett. 102 (2009) 101601.
[13] Ya. B. Zel'dovich, Sov. Phys. JETP 9 (1959) 1389.
[14] R. Golub, D. J. Richardson and S. K. Lamoreaux, Ultra-Cold Neutrons, Adam Hilger, Bristol, 1991.
[15] C. A. Baker et al., Phys. Rev. Lett. 97 (2006) 131801.
[16] J. M. Pendlebury et al., Phys. Rev. A 70 (2004) 032102.
[17] P. G. Harris and J. M. Pendlebury, Phys. Rev. A 73 (2006) 014101.
[18] R. Golub and J. M. Pendlebury, Phys. Lett. A 62 (1977) 337.
[19] R. Golub and S. K. Lamoreaux, Phys. Rep. 237 (1994) 1.
[20] A. Esler et al., Phys. Rev. C 76 (2007) 051302.
[21] P.-H. Chu et al., Phys. Rev. C 84 (2011) 022501.
[22] M. E. Hayden, S. K. Lamoreaux and R. Golub, AIP Conf. Proc. 850 (2006) 147.
[23] G. Baym and C. Ebner, Phys. Rev. 164 (1967) 235.
[24] A. O. Sushkov et al., Phys. Rev. Lett. 93 (2004) 153003.
[25] B. K. Park, A. O. Sushkov and D. Budker, Rev. Sci. Instrum. 79 (2008) 013108.
[26] C. Hovde et al., Proc. SPIE 7693 (2010) 769313.
A Non-Binary Associative Memory with Exponential Pattern Retrieval Capacity and Iterative Learning
Amir Hesam Salavati${}^{\dagger}$, K. Raj Kumar${}^{\ddagger}$, and Amin Shokrollahi${}^{\dagger}$
$\dagger$:Laboratoire d’algorithmique (ALGO)
Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015
Lausanne, Switzerland
E-mail:
{hesam.salavati,amin.shokrollahi}@epfl.ch
$\ddagger$:Qualcomm Research India
Bangalore - 560066, India
E-mail:
[email protected]
Abstract
We consider the problem of neural association for a network of
non-binary neurons. Here, the task is to first memorize a set of patterns using a network of neurons whose states
assume values from a finite number of integer levels. Later, the same network should be able to recall previously memorized patterns from their noisy versions. Prior work in this area considers storing a finite number of purely random patterns and has shown that the pattern retrieval capacity (the maximum number of patterns that can be memorized) scales only linearly with the number of neurons in the network.
In our formulation of the problem, we concentrate on exploiting the redundancy and internal structure of the patterns in order to improve the pattern retrieval capacity. Our first result shows that if the given patterns have a suitable linear-algebraic structure, i.e., they form a subspace of the set of all possible patterns, then the pattern retrieval capacity is in fact exponential in the number of neurons. The second result extends the previous finding to cases where the patterns have weak minor components, i.e., the smallest eigenvalues of the correlation matrix tend toward zero. We will use these minor components (or the basis vectors of the pattern null space) to increase both the pattern retrieval capacity and the error-correction capabilities.
An iterative algorithm is proposed for the learning phase, and two simple neural update algorithms are presented for the recall phase. Using analytical results and simulations,
we show that the proposed methods can tolerate a fair amount of errors in the input while being able to memorize an exponentially large number of patterns.
Neural associative memory, Error correcting codes, message passing, stochastic learning, dual-space method
I Introduction
Neural associative memory is a particular class of neural networks capable of memorizing (learning) a set of patterns and recalling them later in presence of noise, i.e. retrieve the
correct memorized pattern from a given noisy version. Starting from the seminal work of Hopfield in 1982 [1], various artificial neural networks have been designed to mimic the
task of the neuronal associative memory (see for instance [2], [3], [4], [5], [6]).
In essence, the neural associative memory problem is very similar to the one faced in communication systems, where the goal is to reliably and efficiently retrieve a set of patterns (so-called codewords) from noisy versions. More interestingly, the techniques used to implement an artificial neural associative memory look very similar to some of the methods used in graph-based modern codes to decode information. This makes the pattern retrieval phase in neural associative memories very similar to iterative decoding techniques in modern coding theory.
However, despite the similarity in the task and techniques employed in both problems, there is a huge gap in terms of efficiency. Using binary codewords of length $n$, one can construct codes that are
capable of reliably transmitting $2^{rn}$ codewords over a noisy channel, where $0<r<1$ is the code rate [7]. The optimal $r$ (i.e. the
largest possible value that permits the almost sure recovery of transmitted codewords from the corrupted received versions) depends on the noise characteristics of the channel and is known as the Shannon capacity [8]. In fact, the Shannon capacity is achievable in certain cases, for example by LDPC codes over AWGN channels.
In current neural associative memories, however, with a network of size $n$ one can only memorize $O(n)$ binary patterns of length $n$ [9], [2]. To be fair, it must
be mentioned that these networks are designed such that they are able to memorize any possible set of randomly chosen patterns (with size $O(n)$ of course) (e.g., [1],
[2], [3], [4]). Therefore, although humans cannot memorize random patterns, these methods provide artificial neural associative memories with a
pleasant sense of generality.
However, this generality severely restricts the efficiency of the network: even if the input patterns have some internal redundancy or structure, current neural associative memories cannot exploit this redundancy to increase the number of memorizable patterns or improve error correction during the recall phase. In fact, concentrating on redundancies within patterns is a fairly new viewpoint. This point of view is in harmony with coding techniques, where one designs codewords with a certain degree of redundancy and then uses this redundancy to correct corrupted signals at the receiver's side.
In this paper, we focus on bridging the performance gap between the coding techniques and neural associative memories. Our proposed neural network exploits the inherent structure of the input
patterns in order to increase the pattern retrieval capacity from $O(n)$ to $O(a^{n})$ with $a>1$. More specifically, the proposed neural network is capable of
learning and reliably recalling given patterns when they come from a subspace with dimension $k<n$ of all possible $n$-dimensional patterns. Note that although the proposed model does not
have the versatility of traditional associative memories to handle any set of inputs, such as the Hopfield network [1], it enables us to boost the capacity by a great extent in
cases where there is some input redundancy. In contrast, traditional associative memories will still have linear pattern retrieval capacity even if the patterns have good linear-algebraic structure.
In [10], we presented some preliminary results in which two efficient recall algorithms were proposed for the case where the neural graph had the structure of an expander
[11]. Here, we extend the previous results to general sparse neural graphs as well as proposing a simple learning algorithm to capture the internal structure of the patterns (which will be used later in the recall phase).
The remainder of this paper is organized as follows: In Section II, we will discuss the neural model used in this paper and formally define the associative memory problem. We explain
the proposed learning algorithm in Section III. Sections IV and V are respectively dedicated to the recall algorithm and analytically investigating its performance in
retrieving corrupted patterns. In Section VI we address the pattern retrieval capacity and show that it is exponential in $n$. Simulation results are discussed in Section
VII. Section VIII concludes the paper and discusses future research topics. Finally, the Appendices contain some extra remarks as well as the proofs for certain lemmas and theorems.
II Problem Formulation and the Neural Model
II-A The Model
In the proposed model, we work with neurons whose states are integers from a finite set of non-negative values $\mathcal{Q}=\{0,1,\dots,Q-1\}$. A natural way of interpreting this model is to
think of the integer states as the short-term firing rate of neurons (possibly quantized). In other words, the state of a neuron in this model indicates the number of spikes fired by the neuron in a fixed short
time interval.
Like in other neural networks, neurons can only perform simple operations. We consider neurons that can do linear summation over the input and possibly apply a non-linear function
(such as thresholding) to produce the output. More specifically, neuron $x$ updates its state based on the states of
its neighbors $\{s_{i}\}_{i=1}^{n}$ as follows:
1.
It computes the weighted sum
$h=\sum_{i=1}^{n}w_{i}s_{i},$
where $w_{i}$ denotes the weight of the input link from the $i^{th}$ neighbor.
2.
It updates its state as $x=f(h),$
where $f:\mathbb{R}\rightarrow\mathcal{Q}$ is a possibly non-linear function
from the field of real numbers $\mathbb{R}$ to $\mathcal{Q}$.
We will refer to these two as “neural operations” in the sequel.
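The two neural operations above can be written in a few lines. Note that the paper leaves the non-linearity $f$ generic; the choice below (rounding followed by clipping into $\mathcal{Q}$) is our own illustrative assumption:

```python
import numpy as np

def neural_update(weights, states, Q):
    """One 'neural operation': a weighted input sum followed by a
    non-linearity f mapping the real line into {0, ..., Q-1}.
    Here f rounds and clips, one simple choice among many."""
    h = np.dot(weights, states)                # step 1: weighted sum
    return int(np.clip(np.rint(h), 0, Q - 1))  # step 2: x = f(h)

w = np.array([0.5, -1.0, 2.0])   # input link weights (illustrative)
s = np.array([2, 1, 1])          # neighbors' states
print(neural_update(w, s, Q=4))  # h = 0.5*2 - 1 + 2 = 2.0 -> state 2
```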
II-B The Problem
The neural associative memory problem consists of two parts: learning and pattern retrieval.
II-B1 The learning phase
We assume to be given $C$ vectors of length $n$ with integer-valued entries belonging to $\mathcal{Q}$. Furthermore, we assume these patterns belong to a subspace of $\mathcal{Q}^{n}$ with dimension $k\leq n$. Let $\mathcal{X}_{C\times n}$ be the matrix that contains the set of patterns in its rows. Note that if $k=n$, then we are back to the original associative memory problem. However, our focus will be on the case where $k<n$, which will be shown to yield much larger pattern retrieval capacities. Let us denote the model specification by the triplet $(\mathcal{Q},n,k)$.
The learning phase then comprises a set of steps to determine the connectivity of the neural graph (i.e. finding a set of weights) as a function of the training patterns in $\mathcal{X}$ such that these patterns are stable states of the recall process. More specifically, in the learning phase we would like to memorize the patterns in $\mathcal{X}$ by finding a set of non-zero vectors $w_{1},\dots,w_{m}\in\mathbb{R}^{n}$ that are orthogonal to the set of given patterns. Remark here that such vectors exist (for instance the basis of the null-space).
Our interest is to come up with a neural scheme to determine these vectors. The inherent structure of the patterns is thus captured in the obtained null-space vectors, denoted by the matrix $W\in\mathbb{R}^{m\times n}$, whose $i^{\mbox{th}}$ row is $w_{i}$. This matrix can be interpreted as the adjacency matrix of a bipartite graph which represents our neural network. The graph comprises pattern and constraint neurons (nodes). Pattern neurons, as their name suggests, correspond to the states of the patterns we would like to learn or recall. The constraint neurons, on the other hand, should verify whether the current pattern belongs to the database $\mathcal{X}$. If not, they should send proper feedback messages to the pattern neurons in order to help them converge to the correct pattern in the dataset. The overall network model is shown in Figure 1.
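The existence of such null-space vectors is easy to verify numerically. The sketch below builds toy patterns from a $k$-dimensional subspace and extracts orthogonal constraint vectors by SVD; this is a linear-algebra shortcut for illustration, not the neural learning algorithm of Section III (also, the toy patterns are not clipped into $\mathcal{Q}$, since clipping would destroy the subspace property):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, C, Q = 8, 4, 20, 4

G = rng.integers(0, 2, size=(k, n))       # basis of the pattern subspace
X = rng.integers(0, Q, size=(C, k)) @ G   # C patterns as rows of X

# Because rank(X) <= k < n, the null space is non-empty, so non-zero
# constraint vectors w with X @ w = 0 are guaranteed to exist.
r = np.linalg.matrix_rank(X)
_, _, Vt = np.linalg.svd(X.astype(float))
W = Vt[r:]                                 # rows span the null space of X
assert np.allclose(X @ W.T, 0, atol=1e-6)
print(W.shape)                             # (n - r, n): at least n - k rows
```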
II-B2 The recall phase
In the recall phase, the neural network should retrieve the correct memorized pattern from a possibly corrupted version. In this case, the states of the pattern neurons $x_{1},x_{2},\dots,x_{n}$
are initialized with the given (noisy) input pattern. Here, we assume that the noise is integer-valued and additive. (Neural states below $0$ and above $Q-1$ will be clipped to $0$ and $Q-1$, respectively. This is biologically justified, as the firing rate of neurons cannot exceed an upper bound and of course cannot be less than zero.) Therefore, assuming the input to the network is a corrupted version of pattern $x^{\mu}$, the state of the pattern nodes is $x=x^{\mu}+z$, where $z$ is the noise. The neural network should now use the given states together with the fact that $Wx^{\mu}=0$ to retrieve pattern $x^{\mu}$, i.e., it should estimate $z$ from $Wx=Wz$ and return $x^{\mu}=x-z$. Any algorithm designed for this purpose should be simple enough to be implemented by neurons. Therefore, our objective is to find a simple algorithm capable of
eliminating noise using only neural operations.
II-C Related Works
Designing a neural associative memory has been an active area of research for the past three decades. Hopfield was the first to design an artificial neural associative memory in his seminal work in
1982 [1]. The so-called Hopfield network is inspired by Hebbian learning [12] and is composed of binary-valued ($\pm 1$) neurons, which together are able to memorize a
certain number of patterns. In our terminology, the Hopfield network corresponds to a $(\{-1,1\},n,n)$ neural model. The pattern retrieval capacity of a Hopfield network of $n$ neurons was
derived later by Amit et al. [13] and shown to be $0.13n$, under vanishing bit error probability requirement. Later, McEliece et al. [9] proved that under the requirement
of vanishing pattern error probability, the capacity of Hopfield networks is $n/(2\log(n))=O(n/\log(n))$.
In addition to neural networks with online learning capability, offline methods have also been used to design neural associative memories. For instance, in [2] the authors
assume the complete set of patterns is given in advance and calculate the weight matrix using the pseudo-inverse rule [14] offline. In return, this approach helps them improve the
capacity of a Hopfield network to $n/2$, under vanishing pattern error probability condition, while being able to correct one bit of error in the recall phase. Although this is a
significant improvement to the $n/\log(n)$ scaling of the pattern retrieval capacity in [9], it comes at the price of much higher computational complexity and the lack of
gradual learning ability.
While the connectivity graph of a Hopfield network is a complete graph, Komlos and Paturi [15] extended the work of McEliece to sparse neural graphs. Their results are of
particular interest as physiological data is also in favor of sparsely interconnected neural networks. They have considered a network in which each neuron is connected to $d$ other neurons,
i.e., a $d$-regular network. Assuming that the network graph satisfies certain connectivity measures, they prove that it is possible to store a linear number of random patterns (in
terms of $d$) with vanishing bit error probability or $C=O(d/\log n)$ random patterns with vanishing pattern error probability. Furthermore, they show that in spite of the capacity
reduction, the error correction capability remains the same as the network can still tolerate a number of errors which is linear in $n$.
It is also known that the capacity of neural associative memories could be enhanced if the patterns are of low-activity nature, in the sense that at any time instant many of the
neurons are silent [14]. However, even these schemes fail when required to correct a fair amount of erroneous bits, as their information retrieval is no better than that of normal networks.
Extension of associative memories to non-binary neural models has also been explored in the past. Hopfield addressed the case of continuous neurons and showed that, similar to the binary case, neurons with states between $-1$ and $1$ can memorize a set of random patterns, albeit with less capacity [16]. Prados and Kak considered a digital version of non-binary neural networks in which neural states could assume integer (positive and negative) values [17]. They show that the storage capacity of such networks is in general larger than that of their binary peers. However, the capacity would still be less than $n$, in the sense that the proposed neural network cannot have more than $n$ patterns that are stable states of the network, let alone being able to retrieve the correct pattern from corrupted input queries.
In [3] the authors investigated a multi-state complex-valued neural associative memory for which the estimated capacity is $C<0.15n$. Under the same model but using a different learning method, Muezzinoglu et al.
[4] showed that the capacity can be increased to $C=n$. However the complexity of the weight computation mechanism is prohibitive. To overcome this drawback, a Modified
Gradient Descent learning Rule (MGDR) was devised in [18]. In our terminology, all these models are $(\{e^{2\pi js/k}|0\leq s\leq k-1\},n,n)$ neural associative memories.
Given that even very complex offline learning methods cannot improve the capacity of binary or multi-state neural associative memories, a group of recent works has made considerable efforts
to exploit the inherent structure of the patterns in order to increase capacity and improve error correction capabilities. Such methods focus merely on memorizing those patterns that have
some sort of inherent redundancy. As a result, they differ from previous methods in which the network was designed to be able to memorize any random set of patterns. Pioneering this
approach, Berrou and Gripon [19] achieved considerable improvements in the pattern retrieval capacity of Hopfield networks, by utilizing Walsh-Hadamard sequences.
Walsh-Hadamard sequences are a particular type of low correlation sequences and were initially used in CDMA communications to overcome the effect of noise. The only slight downside to the
proposed method is the use of a decoder based on the winner-take-all approach which requires a separate neural stage, increasing the complexity of the overall method. Using low correlation
sequences has also been considered in [5], where the authors introduced two novel mechanisms of neural association that employ binary neurons to memorize patterns belonging to
another type of low correlation sequences, called Gold family [20]. The network itself is very similar to that of Hopfield, with a slightly modified weighting rule. Therefore,
similar to a Hopfield network, the complexity of the learning phase is small. However, the authors failed to increase the pattern retrieval capacity beyond $n$ and it was shown that the
pattern retrieval capacity of the proposed model is $C=n$, while being able to correct a fair number of erroneous input bits.
Later, Gripon and Berrou came up with a different approach based on neural cliques, which increased the pattern retrieval capacity to $O(n^{2})$ [6]. Their method is based on
dividing a neural network of size $n$ into $c$ clusters of size $n/c$ each. Then, the messages are chosen such that only one neuron in each cluster is active for a given message. Therefore,
one can think of messages as a random vector of length $c\log(n/c)$, where the $\log(n/c)$ part specifies the index of the active neuron in a given cluster. The authors also provide a
learning algorithm, similar to that of Hopfield, to learn the pair-wise correlations within the patterns. Using this technique and exploiting the fact that the resulting patterns are very
sparse, they could boost the capacity to $O(n^{2})$ while maintaining the computational simplicity of Hopfield networks.
In contrast to the pairwise correlation of the Hopfield model, Peretto et al. [21] deployed higher order neural models: the models in which the state of the
neurons not only depends on the state of their neighbors, but also on the correlation among them. Under this model, they showed that the storage capacity of a higher-order Hopfield network
can be improved to $C=O(n^{p-2})$, where $p$ is the degree of correlation considered. The main drawback of this model is the huge computational complexity required in the learning
phase, as one has to keep track of $O(n^{p-2})$ neural links and their weights during the learning period.
Recently, the present authors introduced a novel model inspired by modern coding techniques in which a neural bipartite graph is used to memorize the patterns that belong to a subspace
[10]. The proposed model can be also thought of as a way to capture higher order correlations in given patterns while keeping the computational complexity to a minimal level (since
instead of $O(n^{p-2})$ weights one needs to only keep track of $O(n^{2})$ of them). Under the assumptions that the bipartite graph is known, sparse, and an expander, the proposed algorithm
increased the pattern retrieval capacity to $C=O(a^{n})$, for some $a>1$, closing the gap between the pattern retrieval capacities achieved in neural networks and that of coding techniques. For completeness, this approach is presented in the appendix (along with the detailed proofs). The main drawbacks in the proposed approach were the lack of a learning algorithm as well as the expansion assumption on the neural graph.
In this paper, we focus on extending the results described in [10] in several directions: first, we will suggest an iterative learning algorithm, to find the neural connectivity
matrix from the patterns in the training set. Secondly, we provide an analysis of the proposed error correcting algorithm in the recall phase and investigate its performance as a function
of input noise and network model. Finally, we discuss some variants of the error correcting method which achieve better performance in practice.
It is worth mentioning that an extension of this approach to a multi-level neural network is considered in [22]. There, the novel structure enables better error correction.
However, the learning algorithm lacks the ability to learn the patterns one by one and requires the patterns to be presented all at the same time in the form of a big matrix. In [23] we have further extended this approach to a modular single-layer architecture with online learning capabilities. The modular structure makes the recall algorithm much more efficient while the online learning enables the network to learn gradually from examples. The learning algorithm proposed in this paper is also virtually the same as the one we proposed in [23], giving it the advantage of online, gradual learning.
Another important point to note is that learning linear constraints by a neural network is hardly a new topic as one can learn a matrix orthogonal to a set of patterns in the training set
(i.e., $Wx^{\mu}=0$) using simple neural learning rules (we refer the interested readers to [24] and [25]). However, to the best of our knowledge, finding such a matrix subject
to the sparsity constraints has not been investigated before. This problem can also be regarded as an instance of compressed sensing [26], in which the measurement matrix is given
by the big patterns matrix $\mathcal{X}_{C\times n}$ and the set of measurements are the constraints we look to satisfy, denoted by the tall vector $b$, which for simplicity reasons we assume
to be all zero. Thus, we are interested in finding a sparse vector $w$ such that $\mathcal{X}w=0$. Nevertheless, many decoders proposed in this area are very complicated and cannot be
implemented by a neural network using simple neuron operations. Some exceptions are [27] and [28] which are closely related to the learning algorithm proposed in this
paper.
II-D Solution Overview
Before going through the details of the algorithms, let us give an overview of the proposed solution. To learn the set of given patterns, we have adopted the neural learning algorithm
proposed in [29] and modified it to favor sparse solutions. In each iteration of the algorithm, a random pattern from the data set is picked and the neural weights corresponding
to constraint neurons are adjusted in such a way that the projection of the pattern along the current weight vectors is reduced, while trying to make the weights sparse as well.
In the recall phase, we exploit the fact that the learned neural graph is sparse and orthogonal to the set of patterns. Therefore, when a query is given, if it is not orthogonal to the connectivity
matrix of the weighted neural graph, it is noisy. We will use the sparsity of the neural graph to eliminate this noise using a simple iterative algorithm. In each iteration, there is a set
of violated constraint neurons, i.e. those that receive a non-zero sum over their input links. These nodes will send feedback to their corresponding neighbors among the pattern neurons,
where the feedback is the sign of the received input-sum. At this point, the pattern nodes that receive feedback from a majority of their neighbors update their state according to the sign
of the sum of received messages. This process continues until noise is eliminated completely or a failure is declared.
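The recall sweep just described can be sketched directly. The graph and weights below are a hypothetical toy example; the paper's actual recall algorithms and their analysis appear in Sections IV and V:

```python
import numpy as np

def recall(W, x, max_iter=50):
    """Sketch of the recall sweep described above: violated constraint
    neurons send the sign of their input sum to neighboring pattern
    neurons, which shift their state by one level when a strict
    majority of their neighbors agree on a direction."""
    x = x.astype(int).copy()
    for _ in range(max_iter):
        syndrome = W @ x                  # zero iff all constraints hold
        if not syndrome.any():
            return x                      # noise eliminated
        feedback = np.sign(syndrome)      # messages from constraint neurons
        for i in range(len(x)):
            neigh = W[:, i] != 0
            votes = feedback[neigh] * np.sign(W[neigh, i])
            if abs(votes.sum()) > 0.5 * neigh.sum():
                x[i] -= int(np.sign(votes.sum()))
    return None                           # failure declared

# Toy neural graph (hypothetical weights) and a pattern with W @ p = 0.
W = np.array([[1, -1, 0, 1],
              [1, 0, 1, -1],
              [0, 1, -1, -1]])
pattern = np.array([1, 3, 1, 2])
noisy = pattern + np.array([1, 0, 0, 0])  # one additive error
print(recall(W, noisy))                   # recovers [1 3 1 2]
```

In this toy run, the corrupted neuron receives unanimous complaints from both of its constraint neighbors and steps back down, after which the syndrome vanishes and the pattern is returned.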
In short, we propose a neural network with online learning capabilities which uses only neural operations to memorize an exponential number of patterns.
III Learning Phase
Since the patterns are assumed to be coming from a subspace in the $n$-dimensional space, we adapt the algorithm proposed by Oja and Karhunen [29] to learn the null-space basis
of the subspace defined by the patterns. In fact, a very similar algorithm is also used in [24] for the same purpose. However, since we need the basis vectors to be sparse (due to
requirements of the algorithm used in the recall phase), we add an additional term to penalize non-sparse solutions during the learning phase.
Another difference between the proposed method and that of [24] is that the learning algorithm proposed in [24] yields dual vectors that form an orthogonal set. Although one can
easily extend our suggested method to such a case as well, we find this requirement unnecessary in our case. This gives us the additional advantage to make the algorithm parallel and
adaptive. Parallel in the sense that we can design an algorithm to learn one constraint and repeat it several times in order to find all constraints with high probability. And
adaptive in the sense that we can determine the number of constraints on-the-go, i.e. start by learning just a few constraints. If needed (for instance due to bad performance in the recall
phase), the network can easily learn additional constraints. This increases the flexibility of the algorithm and provides a nice trade-off between the time spent on learning and the
performance in the recall phase. Both of these points make the approach biologically more realistic.
It should be mentioned that the core of our learning algorithm here is virtually the same as the one we proposed in [23].
III-A Overview of the proposed algorithm
The problem of finding one sparse constraint vector $w$ is given by equations (1) and (2), in which pattern $\mu$ is denoted by $x^{\mu}$:
$$\min_{w}\sum_{\mu=1}^{C}|x^{\mu}\cdot w|^{2}+\eta g(w)$$
(1)
subject to:
$$\|w\|_{2}=1$$
(2)
In the above problem, $\cdot$ denotes the inner product, $\|\cdot\|_{2}$ the $\ell_{2}$ vector norm, $g(w)$ is a penalty function that encourages sparsity, and $\eta$ is a positive
constant. There are various ways to choose $g(w)$. For instance, one can pick $g(w)=\|w\|_{1}$, which leads to the $\ell_{1}$-norm penalty widely used in compressed sensing
applications [27], [28]. Here, we will use a different penalty function, as explained below.
To form a basis for the null space of the patterns, we need $m=n-k$ vectors, which we can obtain by solving the above problem several times, each time from a random initial
point222It must be mentioned that in order to obtain exactly $m=n-k$ linearly independent vectors, some additional care is needed when repeating the proposed method several
times. This issue is addressed later in the paper..
As the sparsity penalty $g(w)$ in this problem, in this paper we consider the function
$$g(w)=\sum_{i=1}^{n}\tanh(\sigma w_{i}^{2}),$$
where $\sigma$ is chosen appropriately. Intuitively, $\tanh(\sigma w_{i}^{2})$ approximates $|\hbox{sign}(w_{i})|$, so $g(w)$ approximates $\|w\|_{0}$; the larger $\sigma$ is, the closer $g(w)$ will be
to the $\ell_{0}$-norm. By calculating the derivative of the objective function, and by considering the update due to each randomly picked pattern $x$, we obtain the following iterative
algorithm:
$$y(t)=x(t)\cdot w(t)$$
(3)
$$\tilde{w}(t+1)=w(t)-\alpha_{t}\left(2y(t)x(t)+\eta\Gamma(w(t))\right)$$
(4)
$$w(t+1)=\frac{\tilde{w}(t+1)}{\|\tilde{w}(t+1)\|_{2}}$$
(5)
In the above equations, $t$ is the iteration number, $x(t)$ is the sample pattern chosen at iteration $t$ uniformly at random from the patterns in the training set $\mathcal{X}$, and $\alpha_{t}$
is a small positive step size. Finally, $\Gamma(w)=\nabla g(w)$, with $\Gamma:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$, is the gradient of the penalty term for non-sparse solutions. This function has the
interesting property that for very small values of $w_{i}(t)$, $\Gamma_{i}(w(t))\simeq 2\sigma w_{i}(t)$. To see why, consider the $i^{th}$ entry of $\Gamma(w(t))$:
$$\Gamma_{i}(w(t))=\partial g(w(t))/\partial w_{i}(t)=2\sigma w_{i}(t)(1-\tanh^{2}(\sigma w_{i}(t)^{2}))$$
It is easy to see that $\Gamma_{i}(w(t))\simeq 2\sigma w_{i}(t)$ for relatively small $w_{i}(t)$, while for larger values of $w_{i}(t)$ we get $\Gamma_{i}(w(t))\simeq 0$ (see Figure
2). Therefore, by proper choice of $\eta$ and $\sigma$, equation (4) suppresses small entries of $w(t)$ by pushing them towards zero,
thus favoring sparser results. To simplify the analysis, with some abuse of notation, we approximate the function $\Gamma(w(t))$ by the following function:
$$\Gamma_{i}(w(t))=\left\{\begin{array}[]{ll}w_{i}(t)&\mbox{if $|w_{i}(t)|\leq\theta_{t}$};\\
0&\mbox{otherwise},\end{array}\right.$$
(6)
where $\theta_{t}$ is a small positive threshold.
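The behavior of the exact gradient and its thresholded surrogate can be checked numerically. The sketch below is illustrative; the parameter values $\sigma=100$ and $\theta=0.05$ are our own choices, not values prescribed by the text:

```python
import numpy as np

def gamma_exact(w, sigma):
    """Exact gradient of g(w) = sum_i tanh(sigma * w_i^2):
    Gamma_i(w) = 2*sigma*w_i*(1 - tanh(sigma*w_i^2)^2)."""
    return 2.0 * sigma * w * (1.0 - np.tanh(sigma * w ** 2) ** 2)

def gamma_approx(w, theta):
    """Thresholded surrogate of equation (6): keep small entries as-is,
    zero out the large ones."""
    return np.where(np.abs(w) <= theta, w, 0.0)

w = np.array([0.01, -0.02, 0.5, -0.8])
# Small entries: the exact gradient is close to 2*sigma*w_i;
# large entries: it is close to 0, matching the text.
print(gamma_exact(w, sigma=100.0))
print(gamma_approx(w, theta=0.05))
```

For the small entries $\pm 0.01,\pm 0.02$ the exact gradient is close to $2\sigma w_{i}$, while for $\pm 0.5,\pm 0.8$ it is numerically zero, which is exactly the behavior the approximation (6) captures.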
Following the same approach as [29], and assuming $\alpha_{t}$ to be small enough that equation (5) can be expanded in powers of $\alpha_{t}$, we can approximate
the update (4)-(5) with the following simpler version:
$$y(t)=x(t)\cdot w(t)$$
(7)
$$w(t+1)=w(t)-\alpha_{t}\left(y(t)\left(x(t)-\frac{y(t)w(t)}{\|w(t)\|_{2}^{2}}\right)+\eta\Gamma(w(t))\right)$$
(8)
In the above approximation, we also omitted the term $\alpha_{t}\eta\left(w(t)\cdot\Gamma(w(t))\right)w(t)$, since $w(t)\cdot\Gamma(w(t))$ is negligible, especially as $\theta_{t}$ in equation (6) becomes smaller.
The overall learning algorithm for one constraint node is given by Algorithm 1. In words, in Algorithm 1 $y(t)$ is the projection of $x(t)$ onto the basis
vector $w(t)$. If for a given data vector $x(t)$ we have $y(t)=0$, i.e. the data vector is orthogonal to the current weight vector $w(t)$, then according to equation
(8) the weight vector is not updated. However, if the data vector $x(t)$ has some projection onto $w(t)$, then the weight vector is updated in a direction
that reduces this projection.
Since we are interested in finding $m$ basis vectors, we have to repeat the above procedure at least $m$ times, possibly in parallel.333In practice, we may have to repeat this process more
than $m$ times to ensure the existence of a set of $m$ linearly independent vectors. However, our experimental results suggest that most of the time, repeating $m$ times is
sufficient.
Remark 1.
Although we are interested in finding a sparse graph, note that too much sparsity is not desired, because we use the feedback sent by the constraint nodes to eliminate input
noise at the pattern nodes during the recall phase. If the graph is too sparse, the number of feedback messages received by each pattern node is too small to be relied upon. Therefore, we must adjust the penalty
coefficient $\eta$ such that the resulting neural graph is sufficiently, but not excessively, sparse. In the section on experimental results, we compare the error correction performance for different choices of
$\eta$.
III-B Convergence analysis
In order to prove that Algorithm 1 converges to a proper solution, we use results from statistical learning theory, specifically the convergence results for
Stochastic Gradient Descent (SGD) algorithms [30]. Let $E(w)=\sum_{\mu}|x^{\mu}\cdot w|^{2}$ be the cost function we would like to minimize.
Furthermore, let $A=\mathbb{E}\{xx^{T}|x\in\mathcal{X}\}$ be the correlation matrix of the patterns in the training set. Due to the uniformity assumption on the patterns in the training
set, one can rewrite $E(w)=C\,w^{T}Aw$. Finally, denote $A_{\mu}=x^{\mu}(x^{\mu})^{T}$. Now consider the following assumptions:
A1.
$\|A\|_{2}\leq\Upsilon<\infty$ and $\sup_{\mu}\|A_{\mu}\|_{2}=\|x^{\mu}\|^{2}\leq\zeta<\infty$.
A2.
$\alpha_{t}>0$, $\sum\alpha_{t}\rightarrow\infty$ and $\sum\alpha_{t}^{2}<\infty$, where $\alpha_{t}$ is the small learning rate defined in Section III-A.
The following lemma proves the convergence of Algorithm 1 to a local minimum $w^{*}$.
Lemma 1.
Let assumptions A1 and A2 hold. Then, Algorithm 1 converges to a local minimum $w^{*}$ for which $\nabla E(w^{*})=0$.
Proof.
To prove the lemma, we use the convergence results of [30] and show that the assumptions required there to ensure convergence hold for the proposed algorithm. For completeness, these
assumptions are listed here:
1.
The cost function $E(w)$ is three-times differentiable with continuous derivatives. It is also bounded from below.
2.
The usual conditions on the learning rates are fulfilled, i.e. $\sum\alpha_{t}=\infty$ and $\sum\alpha_{t}^{2}<\infty$.
3.
The second moment of the update term should not grow more than linearly with the size of the weight vector. In other words,
$$\mathbb{E}_{\mu}\|2A_{\mu}w+\eta\Gamma(w)\|_{2}^{2}\leq a+b\|w\|_{2}^{2}$$
for some constants $a$ and $b$.
4.
When the norm of the weight vector $w$ is larger than a certain horizon $D$, the opposite of the gradient, $-\nabla E(w)$, points towards the origin. In other words:
$$\inf_{\|w\|_{2}>D}w\cdot\nabla E(w)>0$$
5.
When the norm of the weight vector is smaller than a second horizon $F$, with $F>D$, the norm of the update term $2y(t)x(t)+\eta\Gamma(w(t))$ is bounded
regardless of $x(t)$. This is usually a mild requirement:
$$\forall x(t)\in\mathcal{X},\quad\sup_{\|w\|_{2}\leq F}\|2y(t)x(t)+\eta\Gamma(w(t))\|_{2}\leq K_{0}$$
To start, assumption $1$ holds trivially, as the cost function is three-times differentiable with continuous derivatives, and $E(w)\geq 0$. Assumption $2$ holds because of our
choice of the step size $\alpha_{t}$, as mentioned in the lemma statement.
Assumption $4$ ensures that the vector $w$ cannot escape by growing without bound. Due to the constraint $\|w\|_{2}=1$, this assumption holds as well.
Assumption $3$ holds because:
$$\mathbb{E}_{\mu}\|2A_{\mu}w+\eta\Gamma(w)\|_{2}^{2}=4w^{T}\mathbb{E}_{\mu}(A_{\mu}^{2})w+\eta^{2}\|\Gamma(w)\|_{2}^{2}+4\eta w^{T}\mathbb{E}_{\mu}(A_{\mu})\Gamma(w)\leq 4\|w\|_{2}^{2}\zeta^{2}+\eta^{2}\|w\|_{2}^{2}+4\eta\Upsilon\|w\|_{2}^{2}=\|w\|_{2}^{2}(4\zeta^{2}+4\eta\Upsilon+\eta^{2})$$
(9)
Finally, assumption $5$ holds because:
$$\|2A_{\mu}w+\eta\Gamma(w)\|_{2}^{2}=4w^{T}A_{\mu}^{2}w+\eta^{2}\|\Gamma(w)\|_{2}^{2}+4\eta w^{T}A_{\mu}\Gamma(w)\leq\|w\|_{2}^{2}(4\zeta^{2}+4\eta\zeta+\eta^{2})$$
(10)
Therefore, there exists $F>D$ such that, as long as $\|w\|_{2}^{2}<F$:
$$\sup_{\|w\|_{2}^{2}<F}\|2A_{\mu}w+\eta\Gamma(w)\|_{2}^{2}\leq(2\zeta+\eta)^{2}F=\hbox{constant}$$
(11)
Since all necessary assumptions hold for Algorithm 1, it converges to a local minimum where $\nabla E(w^{*})=0$.
∎
Next, we prove the desired result: at the local minimum, the resulting weight vector is orthogonal to the patterns, i.e. $Aw^{*}=0$.
Theorem 2.
In the local minimum where $\nabla E(w^{*})=0$, the optimal vector $w^{*}$ is orthogonal to the patterns in the training set.
Proof.
Since $\nabla E(w^{*})=2Aw^{*}+\eta\Gamma(w^{*})=0$, we have:
$$w^{*}\cdot\nabla E(w^{*})=2(w^{*})^{T}Aw^{*}+\eta w^{*}\cdot\Gamma(w^{*})$$
(12)
The first term is always greater than or equal to zero. As for the second term, we have $|\Gamma(w_{i})|\leq|w_{i}|$ and $\hbox{sign}(\Gamma(w_{i}))=\hbox{sign}(w_{i})$, where $w_{i}$ is
the $i^{th}$ entry of $w$; hence $0\leq w^{*}\cdot\Gamma(w^{*})\leq\|w^{*}\|_{2}^{2}$. Thus, both terms on the right-hand side of (12) are greater than or
equal to zero, and since the left-hand side is equal to zero, we conclude that $(w^{*})^{T}Aw^{*}=0$ and $\Gamma(w^{*})=0$. The former means $(w^{*})^{T}Aw^{*}=\sum_{\mu}(w^{*}\cdot x^{\mu})^{2}=0$, so we must have $w^{*}\cdot x^{\mu}=0$ for all $\mu=1,\dots,C$. This simply means that the vector $w^{*}$ is orthogonal to all the patterns in the training set.
∎
Remark 2.
Note that the above theorem only proves that the obtained vector is orthogonal to the data set; it says nothing about its degree of sparsity, because there is no guarantee that
the dual basis of a subspace is sparse. The penalty function $g(w)$ in problem (1) only encourages sparsity by suppressing the small entries of $w$,
i.e. shifting them towards zero if they are small and leaving them intact if they are large. Indeed, from the fact that $\Gamma(w^{*})=0$ we know that the entries of
$w^{*}$ are either large or zero, i.e. there are no small entries. Our experimental results in Section VII show that in practice this strategy works well and the
learning algorithm yields sparse solutions.
III-C Avoiding the all-zero solution
Although problem (1) includes the constraint $\|w\|_{2}=1$ to prevent convergence to the trivial solution $w=0$, due to the
approximations we made when deriving the optimization algorithm, we should make sure to choose the parameters such that the all-zero solution is still avoided.
To this end, denote $w^{\prime}(t)=w(t)-\alpha_{t}y(t)\left(x(t)-\frac{y(t)w(t)}{\|w(t)\|_{2}^{2}}\right)$ and consider the following inequalities:
$$\|w(t+1)\|_{2}^{2}=\|w^{\prime}(t)-\alpha_{t}\eta\Gamma(w(t))\|_{2}^{2}=\|w^{\prime}(t)\|_{2}^{2}+\alpha_{t}^{2}\eta^{2}\|\Gamma(w(t))\|_{2}^{2}-2\alpha_{t}\eta\Gamma(w(t))\cdot w^{\prime}(t)\geq\|w^{\prime}(t)\|_{2}^{2}-2\alpha_{t}\eta\Gamma(w(t))\cdot w^{\prime}(t)$$
(13)
Now, in order to have $\|w(t+1)\|_{2}^{2}>0$, it suffices that $2\alpha_{t}\eta|\Gamma(w(t))\cdot w^{\prime}(t)|<\|w^{\prime}(t)\|_{2}^{2}$. Given that $|\Gamma(w(t))\cdot w^{\prime}(t)|\leq\|w^{\prime}(t)\|_{2}\|\Gamma(w(t))\|_{2}$, it is therefore sufficient to have $2\alpha_{t}\eta\|\Gamma(w(t))\|_{2}<\|w^{\prime}(t)\|_{2}$. On the other hand, we have:
$$\|w^{\prime}(t)\|_{2}^{2}=\|w(t)\|_{2}^{2}+\alpha_{t}^{2}y(t)^{2}\left\|x(t)-\frac{y(t)w(t)}{\|w(t)\|_{2}^{2}}\right\|_{2}^{2}\geq\|w(t)\|_{2}^{2}$$
(14)
As a result, in order to have $\|w(t+1)\|_{2}^{2}>0$, it is sufficient to have $2\alpha_{t}\eta\|\Gamma(w(t))\|_{2}<\|w(t)\|_{2}$. Finally, since
$|\Gamma(w(t))|\leq|w(t)|$ entry-wise, we know that $\|\Gamma(w(t))\|_{2}\leq\|w(t)\|_{2}$. Therefore, choosing $2\alpha_{t}\eta<1\leq\|w(t)\|_{2}/\|\Gamma(w(t))\|_{2}$ ensures $\|w(t+1)\|_{2}>0$.
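The two facts used above, the norm inequality (14) and the non-vanishing of $\|w(t+1)\|_{2}$ under $2\alpha_{t}\eta<1$, can be sanity-checked numerically. The parameters below ($n=20$, $\eta=0.5$, $\theta=0.1$, $\alpha_{t}=1/(100+t)$) are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta, theta = 20, 0.5, 0.1
w = rng.standard_normal(n)
w /= np.linalg.norm(w)
for t in range(1, 2001):
    alpha = 1.0 / (100.0 + t)       # 2*alpha*eta < 1 for every t
    x = rng.standard_normal(n)
    y = x @ w
    w_prime = w - alpha * y * (x - y * w / (w @ w))
    # Inequality (14): the correction is orthogonal to w, so the norm cannot shrink.
    assert np.linalg.norm(w_prime) >= np.linalg.norm(w) - 1e-12
    w = w_prime - alpha * eta * np.where(np.abs(w) <= theta, w, 0.0)
    assert np.linalg.norm(w) > 0.0  # the all-zero solution is avoided
print(np.linalg.norm(w))
```

Every iterate satisfies both inequalities: the $y$-correction never decreases the norm, and with $2\alpha_{t}\eta<1$ the sparsity shrinkage never drives $w$ to zero.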
Remark 3.
Interestingly, the above choice of the function $w-\eta\Gamma(w)$ looks very similar to the soft-thresholding function (15) introduced in [27] to
perform iterative compressed sensing. The authors show that their choice of the thresholding function is very competitive, in the sense that one cannot do much better by choosing other
thresholding functions. One main difference between their work and ours, however, is that we enforce sparsity as a penalty in equation (4), while
they apply the soft-thresholding function (15) to the whole of $w$: entries whose magnitude exceeds the threshold are shrunk towards zero by $\theta_{t}$, while the remaining
entries are set to zero.
$$f_{t}(x)=\left\{\begin{array}[]{ll}x-\theta_{t}&\mbox{if $x>\theta_{t}$};\\
x+\theta_{t}&\mbox{if $x<-\theta_{t}$};\\
0&\mbox{otherwise},\end{array}\right.$$
(15)
where $\theta_{t}$ is the threshold at iteration $t$, which tends to zero as $t$ grows.
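The soft-thresholding function (15) has a compact vectorized form; the example values below are our own:

```python
import numpy as np

def soft_threshold(x, theta):
    """Soft-thresholding function of equation (15): shrink entries whose
    magnitude exceeds theta towards zero by theta; zero the rest."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

v = np.array([0.9, -0.4, 0.05, -1.2])
print(soft_threshold(v, 0.3))   # -> [0.6, -0.1, 0.0, -0.9]
```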
III-D Making the Algorithm Parallel
In order to find $m$ constraints, we need to repeat Algorithm 1 several times. Fortunately, we can run these repetitions in parallel, which speeds up the algorithm and is
more meaningful from a biological point of view, as each constraint neuron can act independently of its neighbors. Although running the algorithm in parallel may occasionally produce
linearly dependent constraints, our experimental results show that, starting from different random initial points, the algorithm converges to distinct constraints most of the time.
The chance of obtaining redundant constraints is further reduced if we start from sparse random initial points. Besides, as long as we have enough distinct constraints, the recall algorithm in the
next section can start eliminating noise; there is no need to learn all the distinct basis vectors of the null space defined by the training patterns (although the performance improves as we learn more linearly independent constraints). Therefore, we will use the parallel version to
obtain a faster algorithm.
IV Recall Phase
In the recall phase, we design an iterative algorithm that corresponds to message passing on a graph. The algorithm exploits the fact that the learning algorithm produced a sparse connectivity matrix that is orthogonal to the memorized patterns. Therefore, given a noisy version
of the learned patterns, we can use the feedback from the constraint neurons in Fig. 1 to eliminate the noise. More specifically, the linear input sums to the constraint
neurons are given by the elements of the vector $W(x^{\mu}+z)=Wx^{\mu}+Wz=Wz$, with $z$ being the integer-valued input noise (biologically speaking, the noise can be interpreted as a neuron
skipping some spikes or firing more spikes than it should). Based on observing the elements of $Wz$, each constraint neuron feeds back a message (containing information about $z$) to its neighboring pattern neurons. Based on this feedback, and exploiting the fact that $W$ is sparse, the pattern neurons update their states in order to reduce the noise $z$.
It must also be mentioned that we initially assume asymmetric neural weights during the recall phase. More specifically, we assume the backward weight from constraint neuron $i$ to pattern
neuron $j$, denoted by $W^{b}_{ij}$, to be equal to the sign of the forward weight from pattern neuron $j$ to constraint neuron $i$, i.e. $W^{b}_{ij}=\hbox{sign}(W_{ij})$, where $\hbox{sign}(x)$ is equal to $+1$, $0$ or $-1$ if $x>0$, $x=0$ or $x<0$, respectively. This assumption simplifies the
error correction analysis. Later, in Section IV-B, we consider another version of the algorithm that works with symmetric weights, i.e. $W^{b}_{ij}=W_{ij}$, and compare the performance
of all suggested algorithms in Section VII.
IV-A The Recall Algorithms
The proposed recall phase comprises a series of forward and backward iterations. Two different methods are suggested in this paper, which differ slightly
in the way pattern neurons are updated. The first is based on the Winner-Take-All (WTA) approach and is given by Algorithm 2. In this version, only
the pattern node that receives the highest amount of normalized feedback updates its state, while the other pattern neurons maintain their current states. The normalization is done with
respect to the degree of each pattern neuron, i.e. the number of edges connected to it in the neural graph. The winner-take-all circuitry can easily be added to the neural
model shown in Figure 1 using any of the classic WTA methods [14].
The second approach, given by Algorithm 3, is much simpler: in every iteration, each pattern neuron decides locally whether or not to update its current state. More specifically, if the amount of feedback received by a pattern neuron exceeds a threshold, the neuron updates its state; otherwise, it remains unchanged.444Note that in order to
maintain the current value of a neuron in case no input feedback is received, we can add self-loops to pattern neurons in Figure 1. These self-loops are not shown in
the figure for clarity.
In both algorithms, the quantity $g^{(2)}_{j}$ can be interpreted as the number of feedback messages received by pattern neuron $x_{j}$ from the constraint neurons, while the sign of $g^{(1)}_{j}$
indicates the sign of the noise affecting $x_{j}$, and $|g^{(1)}_{j}|$
indicates the confidence level in the decision regarding that sign.
It is worth mentioning that the Majority-Voting decoding algorithm is very similar to the Bit-Flipping algorithm of Sipser and Spielman for decoding LDPC codes [31], and to a similar
approach in [32] for compressed sensing.
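A compact sketch of a Majority-Voting-style recall iteration, under the asymmetric-weight assumption $W^{b}=\hbox{sign}(W)$, is given below. The toy constraint matrix, the threshold $\varphi=0.8$, and the single-unit-step update rule are illustrative choices, not the paper's exact specification:

```python
import numpy as np

def majority_vote_recall(W, x_noisy, phi=0.8, max_iter=50):
    """Sketch of Majority-Voting recall: each pattern neuron counts the
    fraction of violated neighboring constraints and, above threshold phi,
    moves one step against the indicated noise sign."""
    Wb = np.sign(W)                           # backward weights
    d = np.count_nonzero(W, axis=0)           # pattern-neuron degrees
    x = x_noisy.astype(float).copy()
    for _ in range(max_iter):
        h = W @ x                             # forward: constraint input sums
        if np.allclose(h, 0.0, atol=1e-9):
            break                             # all constraints satisfied
        fb = np.sign(np.where(np.abs(h) > 1e-9, h, 0.0))
        g1 = Wb.T @ fb                        # signed feedback sum
        g2 = np.abs(Wb).T @ np.abs(fb)        # number of feedback messages
        update = (g2 / np.maximum(d, 1)) >= phi
        x = x - update * np.sign(g1)          # move against the noise
    return x

# Toy graph: rows of W are orthogonal to the pattern x_true = [1, 2, 3, 4].
W = np.array([[2., -1., 0., 0.],
              [0., 3., -2., 0.],
              [0., 0., 4., -3.],
              [3., 0., -1., 0.]])
x_true = np.array([1., 2., 3., 4.])
x_noisy = x_true + np.array([0., 1., 0., 0.])     # +1 noise on neuron 1
print(majority_vote_recall(W, x_noisy))           # -> [1. 2. 3. 4.]
```

In this toy example the noisy neuron sees all of its constraints violated (fraction 1), while correct neurons see at most half, so a single iteration removes the noise.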
Remark 4.
To give the reader some insight into why the neural graph should be sparse for the above algorithms to work, consider the backward iteration of both algorithms: it is based on
counting the fraction of input feedback messages received from the neighbors of a pattern neuron. In the extreme case of a complete neural graph, a single noisy pattern neuron violates
all constraint neurons in the forward iteration. As a result, in the backward iteration all pattern neurons receive feedback from their neighbors, and it is impossible to
tell which pattern neuron is the noisy one.
However, if the graph is sparse, a single noisy pattern neuron makes only some of the constraints unsatisfied. Consequently, in the backward iteration only the nodes that share part of the neighborhood
of the noisy node receive feedback, and the fraction of received feedback messages is much larger for the noisy node itself. Therefore, by merely looking at the fraction of
feedback received from the constraint neurons, one can identify the noisy pattern neuron with high probability, as long as the graph is sparse and the input noise is reasonably bounded.
IV-B Some Practical Modifications
Although Algorithm 3 is fairly simple and practical, each pattern neuron still needs two types of information: the number of received feedback messages and the
net input sum. Although one can think of simple neural architectures to obtain the necessary information, we can modify the recall algorithm to make it simpler and more practical. The trick
is to replace the degree of each node $x_{j}$ with the $\ell_{1}$-norm of its outgoing weights; in other words, instead of using $\|w_{j}\|_{0}=d_{j}$, we use $\|w_{j}\|_{1}$.
Furthermore, we assume symmetric weights, i.e. $W^{b}_{ij}=W_{ij}$.
Interestingly, in some of our experiments on denser graphs, this approach performs much better, as illustrated in Section VII. One possible reason is that the $\ell_{1}$-norm, unlike
the $\ell_{0}$-norm, differentiates between two weight vectors that have the same number of non-zero elements but differ in the magnitudes of those elements. The network may use this
additional information to identify the noisy nodes in each update of the recall algorithm.
V Performance Analysis
In order to obtain analytical estimates on the recall probability of error, we assume that the connectivity graph $W$ is sparse. With respect to this graph, we define the pattern and
constraint degree distributions as follows.
Definition 1.
For the bipartite graph $W$, let $\lambda_{i}$ ($\rho_{j}$) denote the fraction of edges that are adjacent to pattern (constraint) nodes of degree $i$ ($j$). We call $\{\lambda_{1},\dots,\lambda_{m}\}$ and
$\{\rho_{1},\dots,\rho_{n}\}$ the pattern and constraint degree distributions from the edge perspective, respectively. Furthermore, it is convenient to define the degree distribution polynomials
as
$$\lambda(z)=\sum_{i}\lambda_{i}z^{i-1}\hbox{ and }\rho(z)=\sum_{i}\rho_{i}z^{i-1}.$$
The degree distributions are determined after the learning phase is finished, and in this section we assume they are given. Furthermore, we consider an ensemble of random neural graphs with
a given degree distribution and investigate the average performance of the recall algorithms over this ensemble. Here, the word "ensemble" refers to the fact that we assume a number of random neural graphs with
the given degree distributions and carry out the analysis for the average scenario.
To simplify analysis, we assume that the noise entries are $\pm 1$. However, the proposed recall algorithms can work with any integer-valued noise and our experimental results suggest that this assumption is not necessary in practice.
Finally, we assume that the errors do not cancel each other out in the constraint neurons (as long as the number of errors is fairly bounded). This is in fact a realistic assumption, because
the neural graph is weighted, with real-valued weights, while the noise values are integers; the probability that a weighted sum of integers equals zero is
negligible.
We perform the analysis only for the Majority-Voting algorithm since, roughly speaking, choosing the Majority-Voting update threshold $\varphi=1$ yields the winner-take-all
algorithm.555It must be mentioned that choosing $\varphi=1$ does not yield the WTA algorithm exactly, because in the original WTA only one node is updated in each round, whereas
in this version with $\varphi=1$ all nodes that receive feedback from all their neighbors are updated. Nevertheless, the performance of the two algorithms is rather similar.
As mentioned earlier, in this paper we perform the analysis for general sparse bipartite graphs. Restricting ourselves to a particular class of sparse graphs known as "expanders" would allow us to prove stronger results on the recall error probabilities; more details can be found in Appendix C and in [10].
However, since it is very difficult, if not impossible in certain cases, to make a graph an expander during an iterative learning method, we focus on the more general case of sparse neural graphs.
To start the analysis, let $\mathcal{E}_{t}$ denote the set of erroneous pattern nodes at iteration $t$, and let $\mathcal{N}(\mathcal{E}_{t})$ be the set of constraint nodes connected to the nodes in
$\mathcal{E}_{t}$, i.e. the constraint nodes that have at least one neighbor in $\mathcal{E}_{t}$. In addition, let $\mathcal{N}^{c}(\mathcal{E}_{t})$ denote the (complementary) set of constraint neurons without any
connection to nodes in $\mathcal{E}_{t}$. Denote the average neighborhood size of $\mathcal{E}_{t}$ by $S_{t}=\mathbb{E}(|\mathcal{N}(\mathcal{E}_{t})|)$. Finally, let $\mathcal{C}_{t}$ be the set of correct pattern
nodes.
Based on the error correcting algorithm and the above notations, in a given iteration two types of error events are possible:
1.
Type-1 error event: A node $x\in\mathcal{C}_{t}$ decides to update its value. The probability of this phenomenon is denoted by $P_{e_{1}}(t)$.
2.
Type-2 error event: A node $x\in\mathcal{E}_{t}$ updates its value in the wrong direction. Let $P_{e_{2}}(t)$ denote the probability of error for this type.
We start the analysis by finding explicit expressions and upper bounds on the averages of $P_{e_{1}}(t)$ and $P_{e_{2}}(t)$ over all nodes as a function of $S_{t}$. We then find an exact relationship for
$S_{t}$ as a function of $|\mathcal{E}_{t}|$, which provides the average bit error probability as a function of the number of noisy input symbols, $|\mathcal{E}_{0}|$.
Having found the average bit error probability, we can easily bound the block error probability of the recall algorithm.
V-A Error probability - type 1
To begin, let $P^{x}_{1}(t)$ be the probability that a node $x\in\mathcal{C}_{t}$ with degree $d_{x}$ updates its state. We have:
$$P^{x}_{1}(t)=\hbox{Pr}\left\{\frac{|\mathcal{N}(\mathcal{E}_{t})\cap\mathcal{N}(x)|}{d_{x}}\geq\varphi\right\}$$
(16)
where $\mathcal{N}(x)$ is the neighborhood of $x$. Assuming random construction of the graph and relatively large graph sizes, one can approximate $P^{x}_{1}(t)$ by
$$P_{1}^{x}(t)\approx\sum_{i=\lceil\varphi d_{x}\rceil}^{d_{x}}{d_{x}\choose i}\left(\frac{S_{t}}{m}\right)^{i}\left(1-\frac{S_{t}}{m}\right)^{d_{x}-i}.$$
(17)
In the above equation, $S_{t}/m$ represents the probability that one of the $d_{x}$ edges is connected to the $S_{t}$ constraint neurons that neighbor the erroneous pattern neurons.
As a result of the above equations, we have:
$$P_{e_{1}}(t)=\mathbb{E}_{d_{x}}(P^{x}_{1}(t)),$$
(18)
where $\mathbb{E}_{d_{x}}$ denotes the expectation over the degree distribution $\{\lambda_{1},\dots,\lambda_{m}\}$.
Note that if $\varphi=1$, the above equation simplifies to
$$P_{e_{1}}(t)=\lambda\left(\frac{S_{t}}{m}\right)$$
V-B Error probability - type 2
A node $x\in\mathcal{E}_{t}$ makes a wrong decision if the net input sum it receives has a sign different from that of the noise it experiences. Instead of finding an exact relation, we bound
this probability by the probability that the neuron $x$ shares at least half of its neighbors with other erroneous neurons, i.e. $P_{e_{2}}(t)\leq\hbox{Pr}\left\{\frac{|\mathcal{N}(\mathcal{E^{*}}_{t})\cap\mathcal{N}(x)|}{d_{x}}\geq 1/2\right\}$, where $\mathcal{E^{*}}_{t}=\mathcal{E}_{t}\setminus x$.
Letting $P^{x}_{2}(t)=\hbox{Pr}\left\{\frac{|\mathcal{N}(\mathcal{E^{*}}_{t})\cap\mathcal{N}(x)|}{d_{x}}\geq 1/2\,\Big|\,\hbox{deg}(x)=d_{x}\right\}$, we have:
$$P^{x}_{2}(t)=\sum_{i=\lceil d_{x}/2\rceil}^{d_{x}}{d_{x}\choose i}\left(\frac{S^{*}_{t}}{m}\right)^{i}\left(1-\frac{S^{*}_{t}}{m}\right)^{d_{x}-i}$$
(19)
where $S^{*}_{t}=\mathbb{E}(|\mathcal{N}(\mathcal{E}^{*}_{t})|)$.
Therefore, we will have:
$$P_{e_{2}}(t)\leq\mathbb{E}_{d_{x}}(P^{x}_{2}(t))$$
(20)
Combining equations (18) and (20), the bit error probability at iteration $t+1$ is
$$P_{b}(t+1)=\hbox{Pr}\{x\in\mathcal{C}_{t}\}P_{e_{1}}(t)+\hbox{Pr}\{x\in\mathcal{E}_{t}\}P_{e_{2}}(t)=\frac{n-|\mathcal{E}_{t}|}{n}P_{e_{1}}(t)+\frac{|\mathcal{E}_{t}|}{n}P_{e_{2}}(t)$$
(21)
Finally, the average block error rate is given by the probability that at least one pattern node $x$ is in error:
$$P_{e}(t)=1-(1-P_{b}(t))^{n}$$
(22)
Equation (22) gives the probability of making a mistake in iteration $t$. Therefore, we can obtain the overall probability of error as $P_{E}=\lim_{t\rightarrow\infty}P_{e}(t)$. To this end, we have to recursively update $P_{b}(t)$ in equation (21), using $|\mathcal{E}_{t+1}|\approx nP_{b}(t+1)$. However, since we have assumed that the noise values are $\pm 1$, we can provide an upper bound on the total probability of error by considering
$$P_{E}\leq P_{e}(1)$$
(23)
In other words, we assume that the recall algorithm either corrects the input error in the first iteration or an error is declared. Obviously, this bound is not tight, as in practice one might be able to correct errors in later iterations; in fact, simulation results confirm this expectation. However, this approach provides a convenient analytical upper bound, since it depends only on the initial number of noisy nodes, and the bound becomes tighter as the initial number of noisy nodes grows. Thus, in summary we have:
$$P_{E}\leq 1-\left(1-\frac{n-|\mathcal{E}_{0}|}{n}\bar{P}_{1}^{x}-\frac{|\mathcal{E}_{0}|}{n}\bar{P}_{2}^{x}\right)^{n}$$
(24)
where $\bar{P}_{i}^{x}=\mathbb{E}_{d_{x}}\{P_{i}^{x}\}$ and $|\mathcal{E}_{0}|$ is the number of noisy nodes in the initial input pattern.
Remark 5.
One might hope to further simplify the above inequalities by finding closed-form approximations of equations (17) and (19). However, as one might expect, this approach leads to very loose and trivial bounds in many cases. Therefore, in our experiments in Section
VII we compare simulation results to the theoretical bound derived from equations (17) and (19).
Now, what remains is to find expressions for $S_{t}$ and $S^{*}_{t}$ as functions of $|\mathcal{E}_{t}|$. The following lemma provides the required relationship.
Lemma 3.
The average neighborhood size $S_{t}$ in iteration $t$ is given by:
$$S_{t}=m\left(1-\left(1-\frac{\bar{d}}{m}\right)^{|\mathcal{E}_{t}|}\right)$$
(25)
where $\bar{d}$ is the average degree of the pattern nodes.
Proof.
The proof is given in Appendix A.
∎
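Putting equations (17), (19), (21), (24) and (25) together, the bound can be evaluated numerically. The sketch below assumes, for simplicity, that every pattern node has the same degree $d$ (so the expectation over the degree distribution is trivial); the parameter values are illustrative:

```python
from math import comb, ceil

def block_error_bound(n, m, d, E0, phi=0.8):
    """Evaluate the block-error bound (24) for a d-regular pattern side,
    with E0 initial +/-1 errors and Majority-Voting threshold phi."""
    S = m * (1 - (1 - d / m) ** E0)                  # equation (25)
    S_star = m * (1 - (1 - d / m) ** max(E0 - 1, 0))
    def tail(p, lo):                                 # binomial upper tail
        return sum(comb(d, i) * p ** i * (1 - p) ** (d - i)
                   for i in range(lo, d + 1))
    P1 = tail(S / m, ceil(phi * d))                  # equation (17)
    P2 = tail(S_star / m, ceil(d / 2))               # equation (19)
    Pb = (n - E0) / n * P1 + E0 / n * P2             # equation (21), t = 0
    return 1 - (1 - Pb) ** n                         # equation (24)

for E0 in (1, 2, 5, 10):
    print(E0, block_error_bound(n=400, m=200, d=8, E0=E0))
```

As expected, the bound grows with the initial number of noisy nodes $|\mathcal{E}_{0}|$, since a larger erroneous set enlarges the average neighborhood $S_{t}$ and hence both error-event probabilities.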
VI Pattern Retrieval Capacity
It is interesting to see that, except for its obvious influence on the learning time, the number of patterns $C$ does not have any effect on the learning or recall algorithms. As long as
the patterns come from a subspace, the learning algorithm yields a matrix orthogonal to all of the patterns in the training set, and in the recall phase all we deal with is
$Wz$, with $z$ being the noise, which is independent of the patterns.
Therefore, in order to show that the pattern retrieval capacity is exponential in $n$, all we need to show is that there exists a "valid" training set $\mathcal{X}$ with $C$ patterns of length
$n$ for which $C\propto a^{rn}$, for some $a>1$ and $0<r<1$. By valid we mean that the patterns should come from a subspace with dimension $k<n$ and the entries in the patterns should be
non-negative integers. The next theorem proves the desired result.
Theorem 4.
Let $\mathcal{X}$ be a $C\times n$ matrix, formed by $C$ vectors of length $n$ with non-negative integer entries between $0$ and $Q-1$. Furthermore, let $k=rn$ for some $0<r<1$. Then, there
exists a set of such vectors for which $C=a^{rn}$, with $a>1$, and $\hbox{rank}(\mathcal{X})=k<n$.
Proof.
The proof is by construction: we construct a data set $\mathcal{X}$ with the required properties. To start, consider a matrix $G\in\mathbb{R}^{k\times n}$ with rank $k$ and $k=rn$, with
$0<r<1$. Let the entries of $G$ be non-negative integers between $0$ and $\gamma-1$, with $\gamma\geq 2$.
We start constructing the patterns in the data set as follows: consider a set of random vectors $u^{\mu}\in\mathbb{R}^{k}$, $\mu=1,\dots,C$, with integer-valued entries between $0$ and $\upsilon-1$, where $\upsilon\geq 2$. We set the pattern $x^{\mu}\in\mathcal{X}$ to be $x^{\mu}=u^{\mu}\cdot G$, if all the entries of $x^{\mu}$ are between $0$ and $Q-1$. Obviously, since both $u^{\mu}$ and $G$ have only
non-negative entries, all entries in $x^{\mu}$ are non-negative. Therefore, it is the $Q-1$ upper bound that we have to worry about.
The $j^{th}$ entry in $x^{\mu}$ is equal to $x_{j}^{\mu}=u^{\mu}\cdot G_{j}$, where $G_{j}$ is the $j^{th}$ column of $G$. Suppose $G_{j}$ has $d_{j}$ non-zero elements. Then, we have:
$$x_{j}^{\mu}=u^{\mu}\cdot G_{j}\leq d_{j}(\gamma-1)(\upsilon-1)$$
Therefore, denoting $d^{*}=\max_{j}d_{j}$, we can choose $\gamma$, $\upsilon$ and $d^{*}$ such that
$$Q-1\geq d^{*}(\gamma-1)(\upsilon-1)$$
(26)
to ensure all entries of $x^{\mu}$ are less than $Q$.
As a result, since there are $\upsilon^{k}$ vectors $u$ with integer entries between $0$ and $\upsilon-1$, we will have $\upsilon^{k}=\upsilon^{rn}$ patterns forming $\mathcal{X}$; that is, $C=\upsilon^{rn}$, which is exponential in $n$ for $\upsilon\geq 2$.
∎
As an example, if $G$ can be selected to be a sparse $200\times 400$ matrix with $0/1$ entries (i.e. $\gamma=2$) and $d^{*}=10$, and $u$ is also chosen to be a vector with $0/1$ elements
(i.e. $\upsilon=2$), then it is sufficient to choose $Q\geq 11$ to have a pattern retrieval capacity of $C=2^{rn}$.
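The construction in the proof of Theorem 4 is easy to reproduce in code. The sketch below uses hypothetical, illustrative sizes ($k=20$, $n=40$) rather than the $200\times 400$ example above: it builds a full-rank $0/1$ generator whose columns have weight at most $d^{*}=10$, so that every product $x=u\cdot G$ with a $0/1$ message word $u$ automatically satisfies condition (26) with $Q=11$.

```python
import random

random.seed(0)
k, n, d_star, Q = 20, 40, 10, 11   # gamma = upsilon = 2, so Q - 1 >= d*(gamma-1)(upsilon-1)

# Columns of G: the first k form an identity block (guaranteeing rank k);
# the rest are random 0/1 columns with at most d_star non-zero entries.
cols = [[1 if r == j else 0 for r in range(k)] for j in range(k)]
for _ in range(n - k):
    rows = set(random.sample(range(k), random.randint(1, d_star)))
    cols.append([1 if r in rows else 0 for r in range(k)])

def pattern(u):
    """x = u . G, computed column by column."""
    return [sum(u[r] * col[r] for r in range(k)) for col in cols]

# Any of the 2^k = 2^{rn} (r = 1/2) binary message words gives a valid pattern.
patterns = [pattern([random.randint(0, 1) for _ in range(k)]) for _ in range(1000)]
```

Since each column weight is at most $d^{*}=10$ and the entries of $u$ are $0/1$, every entry of every generated pattern lies in $\{0,\dots,10\}$, i.e. below $Q=11$, so no generated vector has to be discarded.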
Remark 6.
Note that inequality (26) was obtained for the worst-case scenario and is in fact very loose. Therefore, even if it does not hold, we will still be able to memorize a very large number of patterns, since a large fraction of the generated vectors $x^{\mu}$ will have entries less than $Q$. These vectors correspond to the message vectors $u^{\mu}$ that are "sparse" as well, i.e. that do not have all entries greater than zero. The number of such vectors is polynomial in $n$, with a degree that depends on the number of non-zero entries in $u^{\mu}$.
VII Simulation Results
VII-A Simulation Scenario
We have simulated the proposed learning and recall algorithms for three different network sizes, $n=200,400,800$, with $k=n/2$ in all cases. For each case, we considered several setups with different values of $\alpha$, $\eta$, and $\theta$ in learning Algorithm 1, and different values of $\varphi$ for Majority-Voting recall Algorithm 3. For brevity, we do not report the results for all combinations but present only a selection of them to give insight into the performance of the proposed algorithms.
In all cases, we generated $50$ random training sets using the approach explained in the proof of Theorem 4, i.e. we generated a generator matrix $G$ at random with $0/1$ entries and $d^{*}=10$. We also used $0/1$ generating message words $u$ and set $Q=11$ to ensure the validity of the generated training set. However, since this setup yields $2^{k}$ patterns to memorize, simulating all of them would be prohibitively slow. Therefore, each time we selected a random subset $\mathcal{X}$ of size $C=10^{5}$ from each of the $50$ generated sets and used these subsets as the training sets.
For each setup, we performed the learning algorithm and then investigated the average sparsity of the learned constraints over the ensemble of $50$ instances. As explained earlier, all the constraints for each network were learned in parallel, i.e. to obtain $m=n-k$ constraints, we executed Algorithm 1 from random initial points $m$ times.
As for the recall algorithms, the error-correcting performance was assessed for each setup, averaged over the ensemble of $50$ instances. The empirical results are compared to the theoretical bounds derived in Section V as well.
VII-B Learning Phase Results
In the learning algorithm, we pick a pattern from the training set at each step and adjust the weights according to Algorithm 1. Once we have gone over all the patterns, we repeat this operation several times to make sure that the update for one pattern does not adversely affect the previously learned patterns. Let $t$ be the iteration number of the learning algorithm, i.e. the number of times we have gone over the training set so far. We then set $\alpha_{t}\propto\alpha_{0}/t$ to ensure that the conditions of Theorem 1 are satisfied. Interestingly, all of the constraints converged within at most two learning iterations for all setups, so learning is very fast in this case.
Figure 3 illustrates the percentage of pattern nodes with the specified sparsity measure, defined as $\varrho=\kappa/n$, where $\kappa$ is the number of non-zero elements. Two trends are apparent in the figure. First, as the sparsity threshold is increased, the network becomes sparser. Second, as the network size grows, the connections become sparser.
VII-C Recall Phase Results
For the recall phase, in each trial we pick a pattern randomly from the training set, corrupt a given number of its symbols with $\pm 1$ noise and use the suggested algorithm to correct the
errors. A pattern error is declared if the output does not match the correct pattern. We compare the performance of the two recall algorithms: Winner-Take-All (WTA) and Majority-Voting (MV). Table I shows the simulation parameters in the recall phase for all scenarios (unless specified otherwise).
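The recall-phase trial loop described above can be sketched as a small test harness. Here `recall` is a placeholder for either the WTA or MV procedure; the function names and signatures are our own, not the paper's.

```python
import random

random.seed(0)

def corrupt(x, n_errors):
    """Add +/-1 noise to n_errors distinct, randomly chosen coordinates."""
    y = list(x)
    for i in random.sample(range(len(y)), n_errors):
        y[i] += random.choice((-1, 1))
    return y

def pattern_error_rate(patterns, recall, n_errors, trials=1000):
    """Fraction of trials where the recalled pattern differs from the original."""
    errors = 0
    for _ in range(trials):
        x = random.choice(patterns)
        errors += (recall(corrupt(x, n_errors)) != list(x))
    return errors / trials
```

With the identity map as a dummy `recall`, the rate is $0$ for noiseless inputs and $1$ whenever at least one symbol is corrupted, which is a handy check that the harness itself counts errors correctly.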
Figure 4 illustrates the effect of the sparsity threshold $\theta$ on the performance of the error-correcting algorithm in the recall phase. Here, we have $n=400$ and $k=200$. Two different sparsity thresholds are compared, namely $\theta_{t}\propto 0.031/t$ and $\theta_{t}\propto 0.021/t$. Clearly, as the network becomes sparser, i.e. as $\theta$ increases, the performance of both recall algorithms improves.
In Figure 5 we investigate the effect of network size on the performance of the recall algorithms by comparing the pattern error rates for two different network sizes, namely $n=800$ and $n=400$, with $k=n/2$ in both cases. As is evident from the figure, the performance improves considerably for the larger network. This is partly because in larger networks the connections are relatively sparser as well.
Figure 6 compares the results obtained in simulation with the upper bound derived in Section V. Note that as expected, the bound is quite loose since in
deriving inequality (22) we only considered the first iteration of the algorithm.
We have also investigated the tightness of the bound given in equation (23) against simulation results. To this end, we compare $P_{e}(1)$ and $\lim_{t\rightarrow\infty}P_{e}(t)$ in our simulations for the case of $\pm 1$ noise. Figure 7 illustrates the result; it is evident that allowing the recall algorithm to iterate improves the final probability of error to a great extent.
Finally, we investigate the performance of the modified, more practical version of the Majority-Voting algorithm explained in Section IV-B. Figure 8 compares the performance of the WTA and original MV algorithms with the modified MV algorithm for a network with size $n=200$, $k=100$ and learning parameters $\alpha_{t}\propto 0.45/t$, $\eta=0.45$ and $\theta_{t}\propto 0.015/t$. The neural graph of this particular example is rather dense, because of the small $n$ and small sparsity threshold $\theta$. Here the modified version of the Majority-Voting algorithm therefore performs better, thanks to the extra information provided by the $\ell_{1}$-norm compared to the $\ell_{0}$-norm used in the original version. Note, however, that we did not observe this trend in the other simulation scenarios, where the neural graph was sparser.
VIII Conclusions and Future Works
In this paper, we proposed a neural associative memory that is capable of exploiting inherent redundancy in input patterns to achieve an exponentially large pattern retrieval capacity. Furthermore, the proposed method uses simple iterative algorithms for both the learning and recall phases, which makes gradual learning possible while maintaining good recall performance. The convergence of the proposed learning algorithm was proved using techniques from stochastic approximation. We also analytically investigated the performance of the recall algorithm by deriving an upper bound on the probability of recall error as a function of the input noise. Our simulation results confirm the consistency of the theoretical results with those obtained in practice, for different network sizes and learning/recall parameters.
Improving the error-correction capabilities of the proposed network is a definite subject of our future research. We have already started investigating this issue and proposed a different network structure which reduces the probability of recall error by a factor of $10$ in many cases [22]. We are working on different structures to obtain even more robust recall algorithms.
Extending this method to capture other sorts of redundancy, i.e. beyond membership in a subspace, is another topic we would like to explore in the future.
Finally, some practical modifications to the learning and recall algorithms are of great interest. One good example is a simultaneous learning and recall capability, i.e. a network which learns a subset of the patterns in the subspace and moves immediately to the recall phase. During the recall phase, if the network is given a noisy version of a previously memorized pattern, it eliminates the noise using the algorithms described in this paper. However, if it is given a new pattern, i.e. one that has not yet been learned, the network adjusts its weights in order to learn this pattern as well. Such a model is of practical interest and closer to real-world neuronal networks. It would therefore be interesting to design a network with this capability while maintaining good error-correcting capabilities and a large pattern retrieval capacity.
Acknowledgment
The authors would like to thank Prof. Wulfram Gerstner and his lab members, as well as Mr. Amin Karbasi for their helpful comments and discussions. This work was supported by Grant
228021-ECCSciEng of the European Research Council.
Appendix A Average neighborhood size
In this appendix, we find an expression for the average neighborhood size for erroneous nodes, $S_{t}=\mathbb{E}(|\mathcal{N}(\mathcal{E}_{t})|)$. Towards this end, we assume the following procedure for
constructing a right-irregular bipartite graph:
•
In each iteration, we pick a variable node $x$ with a degree randomly determined according to the given degree distribution.
•
Based on the given degree $d_{x}$, we pick $d_{x}$ constraint nodes uniformly at random with replacement and connect $x$ to these constraint nodes.
•
We repeat this process $n$ times, until all variable nodes are connected.
Note that the assumption of sampling with replacement is made to simplify the analysis. This assumption becomes more accurate as $n$ grows.
With the above procedure in mind, we will find an expression for the average number of constraint nodes connected to $i$ pattern nodes at round $i$ of the construction. This relationship will in turn yield the average neighborhood size of $|\mathcal{E}_{t}|$ erroneous nodes in iteration $t$ of the error-correction algorithm described in Section IV.
With some abuse of notation, let $S_{e}$ denote the number of constraint nodes connected to pattern nodes in round $e$ of the construction procedure above. We write $S_{e+1}$ recursively in terms of $S_{e}$ as follows:
$$S_{e+1}=\mathbb{E}_{d_{x}}\left\{\sum_{j=0}^{d_{x}}{d_{x}\choose j}\left(\frac{S_{e}}{m}\right)^{d_{x}-j}\left(1-\frac{S_{e}}{m}\right)^{j}(S_{e}+j)\right\}=\mathbb{E}_{d_{x}}\left\{S_{e}+d_{x}(1-S_{e}/m)\right\}=S_{e}+\bar{d}(1-S_{e}/m)$$
(27)
where $\bar{d}=\mathbb{E}_{d_{x}}\{d_{x}\}$ is the average degree of the pattern nodes. In words, the first expression calculates the average growth of the neighborhood when a new variable node is added to the graph. The subsequent equalities follow directly from identities for binomial sums. Noting that $S_{1}=\bar{d}$, one obtains:
$$\displaystyle S_{t}=m\left(1-(1-\frac{\bar{d}}{m})^{|\mathcal{E}_{t}|}\right)$$
(28)
In order to verify the correctness of the above analysis, we performed simulations for different network sizes and degree distributions obtained from the graphs returned by the learning algorithm. We generated $100$ random graphs and calculated the average neighborhood size in each iteration over these graphs. Two different network sizes were considered, $n=100$ and $n=200$, with $m=n/2$ in both cases, where $n$ and $m$ are the numbers of pattern and constraint nodes, respectively. The result for $n=100$, $m=50$ is shown in Figure 9, where the average neighborhood size in each iteration is illustrated and compared with the theoretical estimate given by equation (28). Figure 10 shows similar results for $n=200$, $m=100$.
In the figures, the dashed line shows the average neighborhood size over these graphs and the solid line corresponds to the theoretical estimate. The theoretical estimate closely matches the simulation results.
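The step from the recursion to the closed form (28) can also be verified numerically: iterating $S_{e+1}=S_{e}+\bar{d}(1-S_{e}/m)$ from $S_{1}=\bar{d}$ reproduces $m\left(1-(1-\bar{d}/m)^{e}\right)$ exactly, up to floating-point precision.

```python
def closed_form(e, m, d_bar):
    # equation (28)
    return m * (1 - (1 - d_bar / m) ** e)

def recursion(e, m, d_bar):
    # S_1 = d_bar, then S_{e+1} = S_e + d_bar * (1 - S_e / m)
    s = d_bar
    for _ in range(e - 1):
        s += d_bar * (1 - s / m)
    return s
```

The agreement holds because the recursion is affine in $S_{e}$ with fixed point $m$, so $m-S_{e}$ decays geometrically with ratio $1-\bar{d}/m$.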
Appendix B Expander Graphs
This section contains the definitions and the necessary background on expander graphs.
Definition 2.
A regular $(d_{p},d_{c},n,m)$ bipartite graph $W$ is a
bipartite graph between $n$ pattern nodes of degree $d_{p}$ and $m$
constraint nodes of degree $d_{c}$.
Definition 3.
An $(\alpha n,\beta d_{p})$-expander is a $(d_{p},d_{c},n,m)$ bipartite
graph such that for any subset $\mathcal{P}$ of pattern nodes with
$|\mathcal{P}|<\alpha n$ we have $|\mathcal{N}(\mathcal{P})|>\beta d_{p}|\mathcal{P}|$ where
$\mathcal{N}(\mathcal{P})$ is the set of neighbors of $\mathcal{P}$ among the constraint nodes.
The following result from [31] bounds the expansion of randomly chosen regular bipartite graphs with parameter values that are relevant to us.
Theorem 5.
[31]
Let $W$ be a randomly chosen $(d_{p},d_{c})$-regular bipartite graph between $n$ $d_{p}$-regular vertices and $m=(d_{p}/d_{c})n$ $d_{c}$-regular vertices. Then for all $0<\alpha<1$, with high probability, all sets of $\alpha n$ $d_{p}$-regular vertices in $W$ have at least
$$n\left(\frac{d_{p}}{d_{c}}(1-(1-\alpha)^{d_{c}})-\sqrt{\frac{2d_{c}\alpha h(\alpha)}{\log_{2}e}}\right)$$
neighbors, where $h(\cdot)$ is the binary entropy function.
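Theorem 5's lower bound is straightforward to evaluate numerically, which helps when checking whether a candidate $(d_{p},d_{c})$ pair gives useful expansion at a given $\alpha$; note that the bound can be negative, hence vacuous, for small $\alpha$ or small degrees. The function names below are our own.

```python
import math

def binary_entropy(a):
    if a in (0.0, 1.0):
        return 0.0
    return -a * math.log2(a) - (1 - a) * math.log2(1 - a)

def neighborhood_lower_bound(n, d_p, d_c, alpha):
    """Lower bound of Theorem 5 on |N(P)| for sets P of alpha*n pattern nodes."""
    main = (d_p / d_c) * (1 - (1 - alpha) ** d_c)
    penalty = math.sqrt(2 * d_c * alpha * binary_entropy(alpha) / math.log2(math.e))
    return n * (main - penalty)
```

By construction the bound never exceeds $n\frac{d_{p}}{d_{c}}(1-(1-\alpha)^{d_{c}})$, since the entropy penalty is non-negative.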
The following result from [33] shows the existence of families of expander graphs with parameter values that are relevant to us.
Theorem 6.
Let $d_{c}$, $d_{p}$, $m$, $n$ be integers, and let $\beta<1-1/d_{p}$. There exists a small $\alpha>0$ such that if $W$ is a
$(d_{p},d_{c},n,m)$ bipartite graph chosen uniformly at random from the
ensemble of such bipartite graphs, then $W$ is an $(\alpha n,\beta d_{p})$-expander with probability $1-o(1)$, where $o(1)$ is a term
going to zero as $n$ goes to infinity.
Appendix C Analysis of the Recall Algorithms for Expander Graphs
C-A Analysis of the Winner-Take-All Algorithm
We prove the error correction capability of the winner-take-all algorithm in two steps: first we
show that in each iteration, only pattern neurons that are corrupted by
noise will be chosen by the winner-take-all strategy to update their state. Then, we prove that the update is in
the right direction, i.e. toward removing noise from the neurons.
Lemma 7.
If the constraint matrix $W$ is an $(\alpha n,\beta d_{p})$ expander with $\beta>1/2$, and the original number of erroneous neurons is at most $2$, then in each iteration of the winner-take-all algorithm only the corrupted pattern nodes update their values and the other nodes remain intact. For $\beta=3/4$, the algorithm will always pick the correct node if there are two or fewer erroneous nodes.
Proof.
If only one node $x_{i}$ is in error, it is obvious that the corresponding node will always be the winner of the winner-take-all algorithm unless there exists another node that has the same set of neighbors as $x_{i}$. However, this is impossible because, by the expansion property, the neighborhood of these two nodes must have at least $2\beta d_{p}$ members, which for $\beta>1/2$ is strictly greater than $d_{p}$. As a result, no two nodes can have the same neighborhood, and the winner will always be the correct node.
In the case where there are two erroneous nodes, say $x_{i}$ and $x_{j}$,
let $\mathcal{E}$ be the set $\{x_{i},x_{j}\}$ and $\mathcal{N}(\mathcal{E})$ be the corresponding
neighborhood on the constraint nodes side. Furthermore, assume $x_{i}$ and $x_{j}$ share $d_{p^{\prime}}$ of
their neighbors so that $|\mathcal{N}(\mathcal{E})|=2d_{p}-d_{p^{\prime}}$. Now because of the expansion properties:
$$|\mathcal{N}(\mathcal{E})|=2d_{p}-d_{p^{\prime}}>2\beta d_{p}\Rightarrow d_{p^{\prime}}<2(1-\beta)d_{p}.$$
Now we must show that no node other than $x_{i}$ and $x_{j}$ can be the winner of the winner-take-all algorithm. To this end, note that only nodes connected to $\mathcal{N}(\mathcal{E})$ receive some feedback and can hope to win. Consider such a node $x_{\ell}$ that is connected to $d_{p_{\ell}}$ of the nodes in $\mathcal{N}(\mathcal{E})$. Let $\mathcal{E}^{\prime}=\mathcal{E}\cup\{x_{\ell}\}$ and let $\mathcal{N}(\mathcal{E}^{\prime})$ be the corresponding neighborhood. Because of the expansion properties we have $|\mathcal{N}(\mathcal{E}^{\prime})|=d_{p}-d_{p_{\ell}}+|\mathcal{N}(\mathcal{E})|>3\beta d_{p}$. Thus:
$$d_{p_{\ell}}<d_{p}+|\mathcal{N}(\mathcal{E})|-3\beta d_{p}=3d_{p}(1-\beta)-d_{p^{\prime}}.$$
Now, note that the nodes $x_{i}$ and $x_{j}$ receive feedback from $2d_{p}-d_{p^{\prime}}$ edges, since no noise cancellation can occur: the neural weights are real-valued while the noise entries are integers. Since $2d_{p}-d_{p^{\prime}}>3d_{p}(1-\beta)-d_{p^{\prime}}$ for $\beta>1/2$, we conclude that $d_{p}-d_{p^{\prime}}>d_{p_{\ell}}$, which proves that no node outside $\mathcal{E}$ can be picked by the winner-take-all algorithm as long as $|\mathcal{E}|\leq 2$ and $\beta>1/2$.
∎
In the next lemma, we show that the state of erroneous neurons is updated
in the direction of reducing the noise.
Lemma 8.
If the constraint matrix $W$ is an $(\alpha n,\beta d_{p})$ expander with $\beta>3/4$, and the original number of erroneous neurons is less than or equal to $e_{\min}=2$, then in each iteration of the winner-take-all algorithm the winner is updated toward reducing the noise.
Proof.
When there is only one erroneous node, it is obvious that all its neighbors
agree on the direction of update and the node reduces the amount of
noise by one unit.
If two nodes $x_{i}$ and $x_{j}$ are in error, then since the number of their shared neighbors is less than $2(1-\beta)d_{p}$ (as proved in the previous lemma), more than half of their neighbors are unique when $\beta\geq 3/4$. These unique neighbors agree on the direction of the update. Therefore, whichever node wins will be updated so as to reduce the amount of noise by one unit.
∎
The following theorem sums up the results of the previous lemmas to
show that the winner-take-all algorithm is guaranteed to perform error correction.
Theorem 7.
If the constraint matrix $W$ is an $(\alpha n,\beta d_{p})$ expander, with $\beta\geq 3/4$,
then the winner-take-all algorithm is guaranteed to correct at least
$e_{\min}=2$ positions in error, irrespective of the magnitudes of the errors.
Proof.
The proof is immediate from Lemmas 7 and 8.
∎
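The two lemmas can be illustrated with a deliberately simple toy run of the winner-take-all dynamics. To keep the example verifiable, we use disjoint constraint neighborhoods (a trivially expanding graph) and store the all-zero pattern, which lies in every subspace; the weights, sizes, and exact update rule written here are illustrative reconstructions, not the paper's implementation.

```python
n, d_p = 10, 3
m = n * d_p                          # disjoint neighborhoods => trivially an expander
nbrs = [[d_p * i + t for t in range(d_p)] for i in range(n)]
w = {(i, c): 1.0 + 0.1 * i for i in range(n) for c in nbrs[i]}  # positive real weights

z = [0] * n                          # stored pattern: the all-zero vector
z[2], z[5] = 2, -1                   # two errors, one of magnitude 2

for _ in range(20):
    # forward pass: constraint c computes s_c = sum_i w_ic * z_i
    s = [0.0] * m
    for (i, c), wic in w.items():
        s[c] += wic * z[i]
    if all(abs(v) < 1e-12 for v in s):
        break                        # all constraints satisfied: noise removed
    # backward pass: each node counts its non-zero feedback; the winner
    # moves one unit against the net sign of that feedback
    count, winner = max((sum(1 for c in nbrs[i] if abs(s[c]) > 1e-12), i)
                        for i in range(n))
    net = sum(1 if s[c] > 0 else -1 for c in nbrs[winner] if abs(s[c]) > 1e-12)
    z[winner] -= 1 if net > 0 else -1
```

Despite one error having magnitude $2$, the loop removes both errors one unit per iteration, consistent with the "irrespective of the magnitudes" guarantee of Theorem 7.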
C-B Analysis of the Majority Algorithm
Roughly speaking, one would expect the Majority-Voting algorithm to be sub-optimal in comparison to the winner-take-all strategy, since the pattern neurons need to make independent decisions,
and are not allowed to cooperate amongst themselves. In this subsection, we show that despite this restriction, the Majority-Voting algorithm is capable of error correction; the sub-optimality
in comparison to the winner-take-all algorithm can be quantified in terms of a larger expansion factor $\beta$ being required for the graph.
Theorem 8.
If the constraint matrix $W$ is an $(\alpha n,\beta d_{p})$ expander with $\beta>\frac{4}{5}$,
then the Majority-Voting algorithm with $\varphi=\frac{3}{5}$ is guaranteed to correct at least
two positions in error, irrespective of the magnitudes of the errors.
Proof.
As in the proof for the winner-take-all case, we show the result in two steps: first, that for a suitable choice of the Majority-Voting threshold $\varphi$ only the positions in error are updated in each iteration, and second, that this update is towards reducing the effect of the noise.
Case 1
First consider the case that only one pattern node $x_{i}$ is in error. Let $x_{j}$ be any other pattern node, for some $j\neq i$. Let $x_{i}$ and $x_{j}$ have $d_{p^{\prime}}$ neighbors in common. As
argued in the proof of Lemma 7, we have that
$$d_{p^{\prime}}<2d_{p}(1-\beta).$$
(29)
Hence for $\beta=\frac{4}{5}$, $x_{i}$ receives non-zero feedback from at least $\frac{3}{5}d_{p}$ constraint nodes, while $x_{j}$ receives non-zero feedback from at most $\frac{2}{5}d_{p}$
constraint nodes. In this case, it is clear that setting $\varphi=\frac{3}{5}$ will guarantee that only the node in error will be updated, and that the direction of this update is towards
reducing the noise.
Case 2
Now suppose that two distinct nodes $x_{i}$ and $x_{j}$ are in error. Let $\mathcal{E}=\{x_{i},x_{j}\}$, and let $x_{i}$ and $x_{j}$ share $d_{p^{\prime}}$ common neighbors. If the noise entries corrupting these two pattern nodes, denoted by $z_{i}$ and $z_{j}$, are such that $\hbox{sign}(z_{i})=\hbox{sign}(z_{j})$, then both $x_{i}$ and $x_{j}$ receive $-\hbox{sign}(z_{i})$ along all $d_{p}$ edges that they are connected to during the backward iteration. Now suppose that $\hbox{sign}(z_{i})\neq\hbox{sign}(z_{j})$. Then $x_{i}$ (resp. $x_{j}$) receives correct feedback from at least the $d_{p}-d_{p^{\prime}}$ edges in $\mathcal{N}(\{x_{i}\})\backslash\mathcal{N}(\{x_{j}\})$ (resp. $\mathcal{N}(\{x_{j}\})\backslash\mathcal{N}(\{x_{i}\})$) during the backward iteration. Therefore, if $d_{p^{\prime}}<d_{p}/2$, the direction of the update is also correct and the feedback reduces the noise during the update. From equation (29) we know that for $\beta=4/5$, $d_{p^{\prime}}\leq 2d_{p}/5<d_{p}/2$. Therefore, the two noisy nodes will be updated in the correct direction.
Let us now examine what happens to a node $x_{\ell}$ that is different from the two erroneous nodes $x_{i},x_{j}$. Suppose that $x_{\ell}$ is connected to $d_{p_{\ell}}$ nodes in $\mathcal{N}(\mathcal{E})$. From
the proof of Lemma 7, we know that
$$d_{p_{\ell}}<3d_{p}(1-\beta)-d_{p^{\prime}}\leq 3d_{p}(1-\beta).$$
Hence $x_{\ell}$ receives at most $3d_{p}(1-\beta)$ non-zero messages during the backward iteration.
For $\beta>\frac{4}{5}$, we have that $d_{p}-2d_{p}(1-\beta)>3d_{p}(1-\beta)$. Hence by setting $\beta=\frac{4}{5}$ and $\varphi=[d_{p}-2d_{p}(1-\beta)]/d_{p}=\frac{3}{5}$, it is clear
from the above discussion that we have ensured the following in the case of two erroneous pattern nodes:
•
The noisy pattern nodes are updated towards the direction of reducing noise.
•
No pattern node other than the erroneous pattern nodes is updated.
∎
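The counting argument behind the choice $\varphi=3/5$ can be checked mechanically: for $\beta=4/5$, a corrupted node receives strictly more than $\frac{3}{5}d_{p}$ non-zero messages while a clean node receives strictly fewer, and since message counts are integers the threshold separates the two cases. The sketch below uses exact rational arithmetic to avoid floating-point rounding in the ceilings; the function is our own packaging of the bounds above.

```python
from fractions import Fraction
import math

def mv_margins(d_p, one_minus_beta=Fraction(1, 5)):
    """Strict integer bounds from the expansion argument (beta = 4/5)."""
    shared_max = math.ceil(2 * d_p * one_minus_beta) - 1   # d_p' is at most this
    clean_max = math.ceil(3 * d_p * one_minus_beta) - 1    # d_p_ell is at most this
    corrupted_min = d_p - shared_max                       # non-zero messages at a noisy node
    return corrupted_min, clean_max
```

For instance, $d_{p}=10$ gives at least $7$ non-zero messages at a corrupted node and at most $5$ at a clean one, so the threshold $\varphi d_{p}=6$ cleanly separates them.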
C-C Minimum Distance of Patterns
Next, we present a sufficient condition such that the minimum Hamming distance (two, possibly non-binary, $n$-length vectors $x$ and $y$ are said to be at Hamming distance $d$ from each other if they are coordinate-wise equal on all but $d$ coordinates) between these exponentially many patterns is not too small. In order to prove such a result,
we will exploit the expansion properties of the bipartite graph $W$; our sufficient condition will be in terms of a lower bound on the parameters of the expander graph.
Theorem 9.
Let $W$ be a $(d_{p},d_{c},n,m)-$regular bipartite graph, that is an $(\alpha n,\beta d_{p})$ expander. Let $\mathcal{X}$ be the set of patterns corresponding to the expander weight matrix $W$. If
$$\beta>\frac{1}{2}+\frac{1}{4d_{p}},$$
then the minimum distance between the patterns is at least $\lfloor\alpha n\rfloor+1$.
Proof.
Let $d$ be less than $\alpha n$, and $W_{i}$ denote the $i^{th}$
column of $W$. If two patterns are at Hamming distance $d$ from each
other, then there exist non-zero integers $c_{1},c_{2},\dots,c_{d}$ such that
$$c_{1}W_{i_{1}}+c_{2}W_{i_{2}}+\cdots+c_{d}W_{i_{d}}=0,$$
(30)
where $i_{1},\dots,i_{d}$ are distinct integers between $1$ and $n$. Let $\mathcal{P}$ denote any set of pattern nodes of the graph represented by $W$, with $|\mathcal{P}|=d$. As in [32], we
divide $\mathcal{N}(\mathcal{P})$ into two disjoint sets: $\mathcal{N}_{unique}(\mathcal{P})$ is the set of nodes in $\mathcal{N}(\mathcal{P})$ that are connected to only one edge emanating from $\mathcal{P}$, and
$\mathcal{N}_{shared}(\mathcal{P})$ comprises the remaining nodes of $\mathcal{N}(\mathcal{P})$ that are connected to more than one edge emanating from $\mathcal{P}$. If we show that $|\mathcal{N}_{unique}(\mathcal{P})|>0$
for all $\mathcal{P}$ with $|\mathcal{P}|=d$, then (30) cannot hold, allowing us to conclude that no two patterns with distance $d$ exist. Using the arguments in [32, Lemma
1], we obtain that
$$|\mathcal{N}_{unique}(\mathcal{P})|>2d_{p}|\mathcal{P}|\left(\beta-\frac{1}{2}\right).$$
Hence no two patterns with distance $d$ exist if
$$2d_{p}d\left(\beta-\frac{1}{2}\right)>1\Leftrightarrow\beta>\frac{1}{2}+\frac{1}{2d_{p}d}.$$
By choosing $\beta>\frac{1}{2}+\frac{1}{4d_{p}}$, we can hence ensure that the minimum distance between patterns is at least $\lfloor\alpha n\rfloor+1$.
∎
C-D Choice of Parameters
In order to combine the results of the previous subsections into a neural associative scheme that stores an exponential number of patterns and is capable of error correction, we need to choose the various parameters carefully. We summarize some design principles below.
•
From Theorems 6 and 9, the choice of $\beta$ depends on $d_{p}$, according to
$\frac{1}{2}+\frac{1}{4d_{p}}<\beta<1-\frac{1}{d_{p}}$.
•
Choose $d_{c},Q,\upsilon,\gamma$ so that Theorem 4 yields an exponential number of patterns.
•
For a fixed $\alpha$, $n$ has to be chosen large enough so that
an $(\alpha n,\beta d_{p})$ expander exists according to
Theorem 6, with $\beta\geq 3/4$ and so that $\alpha n/2\geq e_{\min}=2$.
Once we choose a judicious set of parameters according to the above requirements, we have a neural associative memory that is guaranteed to recall an exponential number of patterns even if the input is corrupted by errors in two coordinates. Our simulation results in Section VII reveal that a greater number of errors can be corrected in practice.
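The design principles above amount to a small feasibility check. The helper below is our own packaging of the stated inequalities, with Theorem 4's condition $Q-1\geq d^{*}(\gamma-1)(\upsilon-1)$ included; it reports the admissible window for $\beta$ and the minimum alphabet size $Q$.

```python
def design_check(d_p, d_star, gamma=2, upsilon=2):
    """Window for beta from Theorems 6 and 9, and minimal Q from (26)."""
    beta_lo = 0.5 + 1 / (4 * d_p)      # minimum-distance condition (Theorem 9)
    beta_hi = 1 - 1 / d_p              # existence of the expander (Theorem 6)
    q_min = d_star * (gamma - 1) * (upsilon - 1) + 1
    window = (beta_lo, beta_hi) if beta_lo < beta_hi else None
    return window, q_min
```

The window is non-empty for every $d_{p}\geq 3$; requiring it to contain $\beta=3/4$ (Theorem 7) further needs $d_{p}\geq 5$, and containing $\beta=4/5$ (Theorem 8) needs $d_{p}\geq 6$. With $d^{*}=10$ and binary $\gamma=\upsilon=2$, this reproduces the choice $Q=11$ used in the simulations.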
References
[1]
J. J. Hopfield, “Neural networks and physical systems with emergent collective
computational abilities,” Proc. Natl. Acad. Sci. U.S.A., vol. 79,
no. 8, pp. 2554–2558, 1982.
[2]
S. S. Venkatesh and D. Psaltis, “Linear and logarithmic capacities in
associative neural networks,” IEEE Trans. Inf. Theor., vol. 35,
no. 3, pp. 558–568, Sep. 1989.
[3]
S. Jankowski, A. Lozowski, and J. M. Zurada, “Complex-valued multistate neural
associative memory.” IEEE Trans. Neural Netw. Learning Syst., vol. 7,
no. 6, pp. 1491–1496, 1996.
[4]
M. K. Muezzinoglu, C. Guzelis, and J. M. Zurada, “A new design method for the
complex-valued multistate hopfield associative memory,” IEEE Trans.
Neur. Netw., vol. 14, no. 4, pp. 891–899, Jul. 2003.
[5]
A. H. Salavati, K. R. Kumar, M. A. Shokrollahi, and W. Gerstner, “Neural
pre-coding increases the pattern retrieval capacity of hopfield and
bidirectional associative memories,” in Proc. IEEE Int. Symp. Inf.
Theor. (ISIT), 2011, pp. 850–854.
[6]
V. Gripon and C. Berrou, “Sparse neural networks with large learning
diversity,” IEEE Trans. Neur. Netw., vol. 22, no. 7, pp. 1087–1096,
Jul. 2011.
[7]
T. Richardson and R. Urbanke, Modern Coding Theory. New York, NY, USA: Cambridge University Press, 2008.
[8]
C. E. Shannon, “A mathematical theory of communication,” Bell system
technical journal, vol. 27, no. 379, pp. 948–958, 1948.
[9]
R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh, “The
capacity of the hopfield associative memory,” IEEE Trans. Inf.
Theor., vol. 33, no. 4, pp. 461–482, Jul. 1987.
[10]
K. R. Kumar, A. H. Salavati, and M. A. Shokrollahi, “Exponential pattern retrieval capacity with non-binary associative memory,” in IEEE Inf. Theor. Workshop (ITW), Oct 2011, pp. 80–84.
[11]
S. Hoory, N. Linial, and A. Wigderson, “Expander graphs and their applications,” Bull. Amer. Math. Soc. (N.S.), vol. 43, no. 4, pp. 439–561, 2006.
[12]
D. O. Hebb, The Organization of Behavior: A Neuropsychological
Theory. New York: Wiley $\&$ Sons,
1949.
[13]
D. J. Amit, H. Gutfreund, and H. Sompolinsky, “Storing infinite numbers of
patterns in a spin-glass model of neural networks,” Phys. Rev. Lett.,
vol. 55, pp. 1530–1533, Sep 1985.
[14]
J. Hertz, R. G. Palmer, and A. S. Krogh, Introduction to the Theory of
Neural Computation, 1st ed. Boston,
MA, USA: Addison-Wesley Longman Publishing Co., Inc., 1991.
[15]
J. Komlós and R. Paturi, “Effect of connectivity in an associative memory
model,” J. Comput. Syst. Sci., vol. 47, no. 2, pp. 350–373, 1993.
[16]
J. J. Hopfield, “Neurons with graded response have collective computational
properties like those of two-state neurons,” Proc. Natl. Acad. Sci.
U.S.A., vol. 81, no. 10, pp. 3088–3092, 1984.
[17]
D. Prados and S. Kak, “Non-binary neural networks,” in Advances in
Computing and Control, ser. Lecture Notes in Control and Information
Sciences, W. Porter, S. Kak, and J. Aravena, Eds. Springer Berlin Heidelberg, 1989, vol. 130, pp. 97–104.
[18]
D.-L. Lee, “Improvements of complex-valued Hopfield associative memory by
using generalized projection rules.” IEEE Trans. Neural Netw.,
vol. 17, no. 5, pp. 1341–1347, 2006.
[19]
C. Berrou and V. Gripon, “Coded Hopfield networks,” in IEEE Int.
Symp. Turbo Codes & Iterative Information Processing (ISTC), Sep 2010, pp.
1–5.
[20]
R. H. Gold, “Optimal binary sequences for spread spectrum multiplexing,”
IEEE Trans. Inf. Theor., vol. 13, no. 4, pp. 619–621, Sep. 1967.
[21]
P. Peretto and J. J. Niez, “Long term memory storage capacity of
multiconnected neural networks,” Biol. Cybern., vol. 54, no. 1, pp.
53–64, May 1986.
[22]
A. H. Salavati and A. Karbasi, “Multi-level error-resilient neural networks,”
in Proc. IEEE Int. Symp. Inf. Theor. (ISIT), Jul 2012, pp. 1064–1068.
[23]
A. Karbasi, A. H. Salavati, and A. Shokrollahi, “Iterative learning and
denoising in convolutional neural associative memories,” in Proc. Int.
Conf. on Machine Learning (ICML), ser. ICML ’13, Jun. 2013, to appear.
[24]
L. Xu, A. Krzyzak, and E. Oja, “Neural nets for dual subspace pattern
recognition method,” Int. J. Neural Syst., vol. 2, no. 3, pp.
169–184, 1991.
[25]
E. Oja and T. Kohonen, “The subspace learning algorithm as a formalism for
pattern recognition and neural networks,” in IEEE Int. Conf. Neur.
Netw., vol. 1, Jul 1988, pp. 277–284.
[26]
E. J. Candès and T. Tao, “Near-optimal signal recovery from random
projections: Universal encoding strategies?” IEEE Trans. Inform.
Theor., vol. 52, no. 12, pp. 5406–5425, 2006.
[27]
D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms
for compressed sensing,” Proc. Nat. Acad. Sci. U.S.A.,
vol. 106, no. 45, pp. 18 914–18 919, 2009.
[28]
J. A. Tropp and S. J. Wright, “Computational methods for sparse solution of
linear inverse problems,” Proceedings of the IEEE, vol. 98, no. 6,
pp. 948–958, 2010.
[29]
E. Oja and J. Karhunen, “On stochastic approximation of the eigenvectors and
eigenvalues of the expectation of a random matrix,” Math. Analysis and
Applications, vol. 106, pp. 69–84, 1985.
[30]
L. Bottou, “Online algorithms and stochastic approximations,” in Online
Learning and Neural Networks, D. Saad, Ed. Cambridge University Press, 1998.
[31]
M. Sipser and D. A. Spielman, “Expander codes,” IEEE Trans. Inf.
Theor., vol. 42, pp. 1710–1722, 1996.
[32]
S. Jafarpour, W. Xu, B. Hassibi, and A. R. Calderbank, “Efficient and robust
compressed sensing using optimized expander graphs,” IEEE Transactions
on Information Theory, vol. 55, no. 9, pp. 4299–4308, 2009.
[33]
D. Burshtein and G. Miller, “Expander graph arguments for message-passing algorithms,” IEEE Trans. Inf. Theor., vol. 47, no. 2, pp. 782–790, Feb. 2001.