JESUS: A QUESTION OF IDENTITY. By J. L. Houlden. London/New York: Continuum, 2006. Pp. vii + 136. $19.95, ISBN: 9780826489418. At a time when the print run of new books seems to expire almost while the ink is still drying on the copies as they first arrive on the shelf, any volume that is still being republished fourteen years after its initial appearance probably deserves our attention. In this primer, which grew out of lectures given at King’s College, University of London, J. Leslie Houlden, Emeritus Professor of Theology at King’s College, cogently interweaves history, biblical studies, theology and apologetics in an effort to explore what we can know about Jesus. While not shying away from some of the perennial ‘problems’ and tensions involved in such a quest, Houlden, with eloquence, humility and a non-technical style, invites his readers to engage seriously with the question of Jesus’ identity, not only as a Galilean carpenter’s son, but as God’s; as not merely the object of cool enquiry but as the subject and centre of living faith. He asks: ‘What are we now to make of Jesus, both as a historical figure and as involved with belief?’ (pp. 8–9). Houlden is acutely aware that with the history of Jesus, both as recorded in the centuries following his death and in its subsequent developments, we have to do with interpreted history. ‘In this sense’, he writes, ‘theology takes precedence over history in the Christian story’ (p. 11). ‘The Gospels’, he contends, ‘are slanted. They were not written to answer our modern questions, about the order of events, causality and psychological awareness, but to commend faith’ (pp. 42–3). That is why Houlden turns first to Paul, and then to the Gospels, while properly steering clear of driving any wedge between the Christ of faith and the Jesus of history. He is well aware throughout the essay that the modern ‘quest for a neutral view of Jesus and of Christian origins, one fully and solely evidenced from “the facts” (for example, from the Jewish context of his life), is a chimera’ (p. 124). He characterises the historian’s task thus: The historian’s assessment has to steer a careful course: between seeing Jesus as so distinctive that he makes no sense in the context of his times and seeing him as so ordinary, so thoroughly part of his background, that the massive and speedy effects of his life become incomprehensible. Two extremes are unlikely: on the one hand, that our accounts of Jesus are wholly shaped by faith and that in reality he was nothing very remarkable; and on the other hand, that the accounts owe nothing to faith and that all happened and was said exactly as told. What is hard is to know at what point between the extremes truth lies. (pp. 53–4) Tracing the story of Jesus – and the ‘vast yet specific tradition’ (p. 111) that pertains to him – as interpreted from the first century through to the early ecumenical councils, from Pliny and Ignatius of Antioch to Abelard and Julian of Norwich, from John of the Cross and Aquinas to Schleiermacher and Schweitzer, from Reimarus and Strauss to Hengel and Sanders, from Kant, Tillich and Cupitt to Bonhoeffer, Barth and Moltmann, Houlden offers us a portrait of Jesus impressed with the wrestle marks of the Christian community.
But, as Houlden insists, no matter which of the many different postures about Jesus one adopts, in order to be ‘meaningful’, Jesus cannot be coolly and disengagedly observed from a distance: ‘Jesus must be (at least) my saviour: in that sense subjectivity has to be part of the picture. We are concerned with a religion, at whose heart he stands, not in the first instance a theory, which must be consistent if it is to be satisfactory’ (p. 113). Houlden possesses a gift all too rare among Christian theologians and biblical scholars – the ability to harness the breadth of the church’s thinking regarding its Lord and communicate it in a way that is palatable, uncondescending and clear to a readership still finding its footing both inside and outside of the church and the academy. While some readers may wish to question some of Houlden’s presuppositions regarding the dating of divine recognition among Jesus’ first disciples, for example, and not all will follow all of Houlden’s theological conclusions, or perhaps even the route taken itself, his essay remains both informed and constructive, suitably identifies many of the important issues at stake, avoids most of the usual pitfalls, and provides us with some direction for how we might proceed. To this end, the volume includes – in addition to an index – a helpful list of suggestions for further reading linked with each chapter. While Houlden’s opuscule is intended for the enquiring lay person – both ‘sceptics and enthusiastic believers’ (p. 118) – who wishes to ‘understand more about Jesus as a historical figure and as the object of devotion and faith’ (p. vii), it will not fail to educate and inform those more conversant with the technical issues at stake not only in the life and ministry of Jesus but also in how that life and ministry touches our own lives and the life of our multi-faith world. A commendable contribution to an ever-growing library of Jesus studies.
Can you remember your first experiences of money? Perhaps it was playing with toy tills and coins? Or maybe when you lost your first tooth and the tooth fairy paid you a visit? For me, I remember being given coins to pay for ice cream on the way home from school. Our learning and experiences with money usually start at a really young age. In fact, studies have shown that our money habits can already be formed by the time we’re just seven years old! So how can we make sure we help our kids develop great money habits?

1. Don’t wait
Given children are able to understand the basics of money from an early age, there’s no reason to wait. Start teaching kids the basic concepts as soon as possible. This is easily done through play. Using coins is a good way to start, as it helps kids to understand different values, as well as being a great way to help with basic maths skills.

2. You have to earn it
Understanding that money is earned rather than just given is a great way to help kids understand how the world works in regard to money. One way of doing this is to give children age-appropriate chores to do in exchange for pocket money.

3. Show how to budget
Once kids have money of their own, you can then teach them how to budget. They can choose whether to spend their money, save up for something, or share it. This can help them learn patience and the value of saving for something more expensive they really want, and it also gives them the opportunity to learn about giving and develop a broader worldview.

4. Goal setting
Being able to manage their own money can also help children learn about goal setting and delayed gratification. Kids naturally want things immediately, so being able to sit down with them and work out how long it will take them to be able to afford what they’d like (a quick sum like the one sketched below) can be really important when it comes to shaping good relationships with money in the future.

5. Getting good value
A good way to help kids learn to use money responsibly is to teach them how to be a savvy spender. When you take them shopping, show them how you search for and work out what is a good deal or not. Show them how you look for discounts and deals to make money stretch further and how to get the best value. This will help them learn how to use money wisely when spending.

6. Give them autonomy
As humans we tend to learn best from experience, and this is particularly true for kids as they develop. Letting them make their own decisions with their money is a great way to help them learn. Give them advice and guidance, but ultimately, letting them decide can be really helpful as they learn.

7. Be a good role model
Finally, children learn so much just from watching us as parents, so the best way we can help them to be good with money is by demonstrating good money habits ourselves.
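For the goal-setting step, the sums are simple enough that older kids can check them for themselves. Here is a minimal sketch in Python of the kind of calculation meant above; the price and pocket-money figures are made up purely for illustration:

import math

# Hypothetical figures: how many weeks until a child can afford something
# they want, assuming a fixed amount of pocket money is saved each week.
price = 30.00          # cost of the item the child is saving for (made-up figure)
weekly_saving = 2.50   # pocket money set aside each week (made-up figure)

weeks_needed = math.ceil(price / weekly_saving)
print(f"Saving {weekly_saving:.2f} a week, you'll have enough in {weeks_needed} weeks.")
# Prints: Saving 2.50 a week, you'll have enough in 12 weeks.

The same sum works just as well on paper; the point is simply to make the wait concrete before the child commits to the goal.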
Five ways to jumpstart operations in the next normal
By Katy George

In the wake of radical and rapid disruptions from Covid-19, organizations have a window of opportunity to rewrite and transform their entire operations strategies.

The coronavirus pandemic has challenged supply and demand norms across sectors, and the speed of disruption exposed points of weakness and fragility in global supply chains and service networks. Yet at the same time, the crisis forced operations teams to achieve long-term ambitions that would have been considered impossible before the virus. Leading retailers boosted ecommerce capabilities virtually overnight to deliver food to millions of customers confined to their homes. One European healthcare provider jettisoned its two-year plan for the rollout of e-health services so that in only ten days it could deploy a new remote-treatment system to thousands of patients. These are just two examples among many that show how companies took quick action to adapt, achieving new levels of visibility, agility, productivity, and end-to-end customer connectivity—while preserving cash.

Let no learning go to waste. Many business leaders are looking for ways to embed what they have discovered during the Covid-19 crisis, and they’re now aspiring to create a new kind of operational performance, one where increased innovation enables agility, and agility creates resilience – and at lower cost. As response efforts converge with the ambition to transform, our ongoing work and discussions with leaders in multiple industries suggest that five themes will shape resilient and reimagined operations on the other side of Covid-19.

Building operations resilience
Successful companies will redesign their operations and supply chains to protect against potential shocks. More companies will set up dedicated supply-chain risk-management functions, working alongside the manufacturing, procurement, and supply-chain functions. The resulting actions may involve accelerating decentralization, deploying inventory closer to customers, and developing crisis-response plans and capabilities. Companies will also revisit their global asset footprint. The once-prevalent global-sourcing model in product-driven value chains has steadily declined as new technologies and consumer demand patterns encourage regionalization of supply chains. The trend is likely to accelerate, as companies reassess the risks of globally integrated asset networks and supply chains. Services may follow a similar pattern, with providers emphasizing regional operations, slowing the last decade’s growth in global services trade. To win in the next-normal environment, companies will need to achieve this step-change in resilience without unsustainable increases in their costs.

Accelerating end-to-end value-chain digitization
A lot of what had been done to deliver on visibility was based on algorithms – but even algorithms cannot help predict an unprecedented phenomenon. Accelerating end-to-end operations digitization will be critical in resolving the long-standing trade-off between efficiency and resilience, and competitiveness will be based on technology. Those organizations that previously invested in end-to-end visibility of supply, inventory and demand were much better prepared to accommodate the significant changes the crisis brought to each of those areas. Going forward, this will likely change the way companies are working, with daily decisions and much tighter alignment between operations and the commercial/sales functions.
In another example, many companies were able to continue production and delivery to customers by automating processes or developing self-service systems. These approaches can accelerate workflows and reduce errors in the short term, and when applied end-to-end, they can transform the customer experience and significantly boost enterprise value. For example, in call centers, the application of robotic process automation (RPA) for back-office and invoicing tasks can free up agents to deal with complex queries, areas where they could add the most value. The crisis demonstrated again that low-cost, high-flexibility operations are not only possible – they are happening and they are beneficial. Research by the World Economic Forum, in collaboration with McKinsey, shows that companies often achieve significant and simultaneous improvements across multiple performance measures when they integrate advanced digital technologies across the value chain.

Rapidly increasing capital- and operating-expense transparency
To survive and thrive amidst the economic fallout, companies can build their ‘next-normal’ operations around a revamped approach to spending that enables a different cost structure. And they will need to make these changes quickly. Organizations can begin with an in-depth review of their operating costs. Technology-enabled methodologies can significantly accelerate cost-transparency work, compressing months of effort into weeks or days. These digital approaches include procurement-spending analysis and clean-sheeting, end-to-end inventory rebalancing, and capital-spend diagnostics and portfolio rationalization. Operations functions can also play a central role in companies’ cash- and liquidity-management activities. Optimizing an organization’s cash position in the potentially volatile post-crisis environment will require companies to increase the visibility not only of their own cost structures, but also those of their suppliers. Leading organizations are adopting increasingly sophisticated techniques in their capital planning, assessing each project’s return on investment against multiple scenarios, and continually reviewing their capital-project portfolios. This is a unique moment where companies likely won’t face the same trade-offs between flexibility and cost that they did in the past.

Driving the ‘future of work’
The future of work – where all people in every industry use digital technologies, data and analytics in new ways to perform their existing jobs – was a change that was already underway. With Covid-19 upending the way work is done, employees across all functions have learned how to complete tasks remotely, using digital communication and collaboration tools. In operations, this means future-of-work trends will accelerate, with a marked reduction in manual and repetitive roles and an increase in the need for analytical and technical skills. This shift will therefore require an unprecedented wave of reskilling in operations roles, and organizations will need to ramp up their reskilling efforts significantly to redeploy talent at speed and scale. For example, some companies have set up internal training academies focused on specialized skills by using a combination of e-learning, classroom training, and on-the-job coaching.
In tandem with reskilling, companies may adapt their operating models to manage physically distributed operations teams, with staff on the ground in local markets able to draw upon the expertise of specialist colleagues who provide support remotely via digital connectivity tools.

Reimagining a sustainable operations competitive advantage
Operations can play an essential role in creating lasting competitive advantage and in meeting environmental and social-responsibility goals. We are already seeing multiple ways in which organizations are responding to these opportunities – informed by customer insights, some companies will reinvent themselves entirely in the coming years, focusing on specific technologies or market niches, or changing their relationship to their end-customers and intermediaries. Others will transform the way they develop products, using agile processes and digital links to improve their connection with customers. Still others are adopting manufacturing technologies and supply-chain arrangements to consume less material, use less energy, and generate less waste. Importantly, these changes won’t just apply to individual organizations; instead, entirely new ecosystems will emerge that include suppliers and adjacent industry players to collectively shift into the next normal.

With the likelihood of prolonged uncertainty over supply, demand, and the availability of resources, Covid-19 may be the trigger for operations functions to adopt an agile approach to transformation. As companies transition to the next normal, they can retain these powerful and effective approaches and structures, which have helped many organizations achieve unprecedented visibility and cross-functional agility in their operations, rather than dismantle them once the crisis has passed.
The Obama administration could disable one of the most powerful sources of wealth and income inequality in our country. Yet no one ever mentions it as a possibility. Don’t they know? There have always been disparities in income, of course, but the gap now isn’t so much between the rich and poor as between the super-rich and everyone else. There’s no more dramatic example of this than the astronomical income of hedge fund managers and traders, and the special tax break that allows them to pocket so much of that income. The most remarkable aspect of this fabulous new wealth is that much of it is simply a gift from the government—a gift that could be eliminated by the stroke of a pen. A quick refresher on how the tax break works: An executive’s salary is taxed at 39 percent, the highest rate for earned income. But the income of a hedge fund manager is taxed at only 20 percent, which is the highest long-term capital gains rate—even though the hedge fund manager is deriving income that’s as directly earned as the wages of a steelworker. He made money when he established his fund. But the rule for hedge funds is that the “realization event”—when income is realized—is not known until later, so “carried interest” turns the income into a capital gain. This is what’s known as a legal fiction, which means that it’s a great big lie. Warren Buffett won a lot of praise by pointing out that his secretary pays taxes at a higher rate than he does. Actually, Buffett’s 20 percent rate is not due to the hedge fund loophole, but presumably because the vast majority of his income comes from dividends or capital gains, which are taxed at the lower rate because the corporation has already paid taxes on that income when it was first earned. Unlike the directly earned income of hedge fund executives, the “same” money is being taxed twice. Buffett’s point is well-meaning, but it’s in fact more appropriate when applied to the special tax privileges enjoyed by hedge fund operators. There is no justification for taxing hedge fund managers’ income as capital gains. So how could Congress have done such a thing? Here’s the secret: Congress didn’t do it. There are a lot of special deals that the federal legislature has snuck into law, but this isn’t one of them. The hedge fund tax loophole wasn’t a bill that was passed into law; it was never voted on. It was part of a revenue procedure issued by the Internal Revenue Service in 1993, before there was any such thing as a hedge fund. According to Alan Wilensky, who in 1992 and 1993 was the acting assistant secretary of the Treasury for tax policy, the revenue procedure was intended to clarify the tax treatment of some real estate investments. It defined a “realization event” as when an interest in a piece of real estate was sold or exchanged; if the event happened more than a year after the real estate had been acquired, then the income would be treated as a long-term capital gain. Later, when hedge funds started to come into being, the IRS applied the same revenue procedure to this completely different new financial device. No new ruling was issued—the old one was just assumed to cover it. This may have been understandable given the small number of hedge funds at the time, but by the end of the 1990s, hedge funds had become a wildly profitable and highly visible part of Wall Street. Robert Rubin, then Treasury secretary, could have asked for a new revenue ruling but chose not to address the new landscape. And there still has been no hedge fund–specific ruling on this matter.
That’s why hedge funds could lose their tax advantage if only the IRS issued a new ruling saying so—no act of Congress required. The IRS is part of the Department of the Treasury. The secretary of the Treasury is appointed by the president and is part of his Cabinet. So the outrageous exemption from fair taxation could be ended right now if only President Obama would pick up the phone and order it done. He’s the one who’s been claiming that administrative remedies are just fine, despite the presence of the Congress: When Congress couldn’t move on the minimum wage bill, the president applied a new higher wage to federal contractors—and he did this administratively. The guidelines to cut carbon pollution are likewise administrative. So: Go for it, Mr. President. With one call you could bring enormous new revenues into the Treasury each year, and, not incidentally, cut the price of Picassos in half, to say nothing of the effect on $80 million condos. What swift action by the president will not do is impoverish hedge fund managers. Paying the same tax rates as their accountants will not clean out the hedge fund kings. The 25 highest-earning hedge fund managers and traders made a combined $24.3 billion in 2013, according to Nathan Vardi in Forbes.
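To put a rough number on the revenue at stake, here is a back-of-envelope sketch in Python, applying the two rates quoted above to that Forbes total for the top 25 managers alone (an illustrative calculation only, not an official estimate):

# Rough illustration only: actual liabilities would depend on deductions,
# state taxes, the timing of realization events, and how much of each
# manager's income is genuinely carried interest.
combined_earnings = 24.3e9   # top 25 hedge fund managers' 2013 earnings (Forbes, per the article)
earned_income_rate = 0.39    # top rate on ordinary earned income cited above
capital_gains_rate = 0.20    # top long-term capital gains rate cited above

extra_tax = combined_earnings * (earned_income_rate - capital_gains_rate)
print(f"Extra tax from the top 25 alone: ${extra_tax / 1e9:.1f} billion")
# Prints: Extra tax from the top 25 alone: $4.6 billion

Even confined to those 25 individuals, the 19-point rate gap works out to several billion dollars a year, which gives a sense of the scale of revenue involved.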
The evidence clearly indicates that climate change, once considered a problem for future generations, affects DC residents and businesses today. In recent years, we have seen how climate change is already impacting us with record-breaking heat waves and snowstorms, flooding caused by rising sea levels and heavy rains, and the destructive 2012 derecho storm. These events are sobering reminders that without action, increasingly severe weather events will threaten to disrupt our power grid, harm our economy, and cost lives. Climate Ready DC is the city's strategy to make DC more resilient to future climate change. DC's investments in expanding our tree canopy, managing stormwater, and greening our construction codes are helping us to prepare for hotter summers and heavier rains. Our programs to save energy and install solar are also helping to make our power system more durable. But, we have much more work to do to ensure that all residents are protected, and we cannot do it alone. Achieving our commitment to climate resiliency by 2050 requires ambitious action today—we must make forward-looking choices and targeted investments while monitoring our progress now and in the coming decades.
Can you ski in Denver in December? The most popular season passes among Denver locals are the Epic Pass (covers Vail, Beaver Creek, Breckenridge, Arapahoe Basin and Keystone) or the Rocky Mountain Super Pass (covers Winter Park, Copper Mountain, Steamboat, and Monarch).

Does Colorado have snow? Colorado weather can change drastically over a day, and can see all weather conditions throughout the year. ... It has snowed each month of the year in Colorado, but the snow falls mainly between late October and late April. Snow is usually heavier and wetter (more moisture) in springtime than in winter.

Can you ski in Denver in December? There might not be a huge amount of snowfall, but generally speaking, December is a beautiful time of year to go skiing in Colorado. It's festive, and many of the resorts really go to town with their Christmas decorations!

Does it snow in Denver in December? What is Denver typically like in December, according to the National Weather Service? From 1882-2020, the average December snowfall is 8 inches. From 1991-2020, the average temperature for December is 31.2 degrees.

How cold does it get in Colorado in the winter? Average temperatures in Colorado in the winter are between 16° and 54°F, with colder temperatures the further you go into the mountains. Aside from avalanches and rock slides, Colorado does experience other natural disasters every year.

Is Colorado fun in the winter? Colorado's sun-soaked skies, powder-filled valleys and snow-capped peaks make it a winter wonderland filled with fun things to do, like snowmobiling, tubing, ice skating, snowshoeing, soaking in hot-springs pools and much more.

How bad is winter in Denver? Denver winters are actually quite mild. While it can get decidedly chilly sometimes, overall temperatures during the winter months are pretty moderate. “Even the coldest month, December, has an average daily high temperature of 45 degrees, and days reaching 60 degrees are fairly common,” Wagner says.

Is it safe to drive in Colorado in December? Advice for snowy and icy driving conditions in Colorado: it doesn't take long for conditions to become treacherous in states that have winter weather for part of the year. Roads can get slick and icy fast, making driving a challenge for even the most seasoned driver.

What does Colorado look like in December? The month of December in Denver experiences essentially constant cloud cover, with the percentage of time that the sky is overcast or mostly cloudy remaining about 41% throughout the month. The clearest day of the month is December 26, with clear, mostly clear, or partly cloudy conditions 60% of the time.

Here are some fun winter activities your teen can do if you live in a cold climate:
- Go ice skating.
- Go sledding.
- Go downhill or cross-country skiing.
- Build a snowman.
- Build a snow fort.
- Have a snowball fight.
- Go on a winter hike.

What to do in Denver Colorado?
- Wings Over the Rockies Air & Space Museum. What an interesting museum for all aircraft fans and their families. ...
- Molly Brown House Museum. Molly (or Margaret as she was christened) was the brave socialite and activist who found fame when she survived the 1912 sinking of the legendary ...
- Denver Climbing Company. ...
- Coors Field. ...
- Denver Botanic Gardens. ...

How cold does Denver Colorado get in the winter?
Weather & Climate Info
Winters are mild, with an average daily high temperature of 45 degrees Fahrenheit, and days reaching 60 degrees are not uncommon. Snow doesn't stay on the ground long in Denver, so golf courses and outdoor cafes are able to stay open all year.

What is Denver like in January? January weather in Denver, Colorado, United States: daily high temperatures are around 45°F, rarely falling below 28°F or exceeding 61°F. Daily low temperatures increase by 2°F, from 22°F to 24°F, rarely falling below 6°F or exceeding 37°F.

Is Colorado nice in January? You will love Colorado. January can be warm or cold; it is iffy for towns like Denver, but up in the mountains there will be snow for skiing, tubing, snowmobiling, snowboarding, and dog sledding.

Is it good to go to Colorado in January? Colorado in the winter can get bitterly cold, with heavy snowstorms and freezing temperatures; on the flip side, however, blue skies and mild conditions are commonplace, with January, February and March always popular months for ski resorts such as Aspen.

What are fun things to do in Denver CO? The best things to do near Denver:
- Walk 16th Street Mall for a cultured taste of Colorado
- Avoid the crowds at Larimer County Square, Denver’s most historic block
- Grab a beer at one of Denver’s famous microbreweries
- Experience Colorado Ski Country USA
- Hike and enjoy live music at Red Rocks Amphitheater

What are some unique things about Denver? Interesting facts about Denver: Denver is one of the few cities in history that was not built on a road, railroad, lake, navigable river or body of water when it was founded; it just happened to be where the first few flakes of gold were found in 1858. Fun fact: the first permanent structure in Denver was a saloon.
Wrinkles, fine lines, and various skin abnormalities are all common aspects of the aging process. When diet, exercise, and skin creams fail to achieve your desired results, laser facial treatment is an option you may want to consider exploring. Take a few moments to learn more about how laser facial treatments can revitalize your complexion to give you a younger look. How Does Laser Facial Treatment Work? Laser facial treatment relies on sophisticated technology to target many different skin problems. First, a local anesthetic is applied to sensitive areas to alleviate any discomfort. Next, a high-powered laser carefully removes the top layer of skin from targeted areas while simultaneously stimulating the dermis layer beneath. A laser facial treatment usually takes no more than a few hours to complete. As the skin heals, wrinkles, acne, or scars will become less prominent and may disappear altogether. The final result of laser facial treatment is a much smoother, healthier skin tone, giving you a more youthful appearance. What Skin Concerns Can Laser Facial Treatment Address? While certain skincare ointments can slowly help create better-looking skin, results can be unpredictable and daily application can be time-consuming. Laser facial treatments can resolve numerous cosmetic skin issues much faster and more effectively than many at-home remedies. Here are a few ways laser facial treatment can enhance the vitality of your skin: - Repair acne scars - Tighten wrinkles around eyes or mouth - Remove birth spots, lesions, or discoloration of the skin - Eliminate unwanted hair growth Advantages of Laser Facial Treatment Face creams that promise to create more beautiful skin often take a long time to begin working, and the results can be underwhelming. On the other hand, surgeries to improve skin tone are costly and recovery usually takes a considerable amount of time. For many, laser facial treatment is a better way to improve complexion quickly without the need for constant skin maintenance or complex invasive procedures. There are many unique benefits to laser facial treatment that other skin rejuvenation options don’t provide, including: - Fast and Simple Procedure: A laser facial treatment can take just a single afternoon. Laser treatments may be focused on specific areas or reinvigorate the entire face, depending on your needs. - Rapid Recovery: Most individuals can return to normal activities within a week or less. In contrast, many cosmetic surgeries take much more time to heal. - Lasting Results: Laser facial treatments improve the aesthetics of skin for the long term. After a single treatment, you’ll see a massive improvement in how your skin looks. - Safe: A laser facial treatment is a safe and non-invasive procedure. Laser treatment is an excellent option for those who would prefer to avoid surgery.
Contemporary knowing and changing the mind through ‘awareness’ based mental processes and a new level of consciousness
All our existing mental processes like perception, cognition, emotions, feelings, observation, classification, analysis, planning, problem solving, inquiry, discovery and then Will or doing have one thing in common. They all arise out of the individual biological unit’s connection with objective reality, which is tangible to it through some form of perception or cognition. The process of ‘knowing’ and ‘changing’ the mind that we are proposing will be through a qualitatively different mental process of ‘awareness’, which is a type of knowing that is capable of arising without any tangible contact with reality. This means it is a process which is independent of normal sensory perception, as it deals with mental states and functions which are not yet tangible for our existing sensory/perceptual processes, and where the stimulus is not biology but subjectivity. So it is an awareness of mental processes in relation to one’s subjectivity; it involves another level/form of ‘perception’ of mental processes, including cognition, observation, cogitation, thinking, knowing, inquiry, which is in relation to our subjective existence and not biology. It is this process of awareness which produces its own variety of understanding and consciousness. If contemporary individuals want to change those parts of their subjectivity which are intangible to their normal perceptual processes, then they need the above-mentioned type of awareness. It is this awareness which can lay down the steps required for understanding the ‘unperceivable’ mental processes. We therefore need to combine this new awareness with the existing perception based awareness and understanding and learn to employ this composite process for the intelligent ‘knowing’ and ‘changing’ of our minds. The process of intelligent change through intelligently made mental processes will be initiated and based on a new need (essentially a nonverbal process), which will arise out of this new type of awareness and not the perception based awareness and understanding that is mostly verbal and largely a randomly arrived at mental process. The reason is that the nature and complexity of the steps of changing through intelligent mental processes is such that they will need to be undertaken under the guidance and management of this new type of awareness. And that is also why a need which arises out of the perception based awareness and understanding cannot produce the necessary mental processes required for intelligently changing the mind. More so because existing needs are already fully occupied with perpetuating the existing mind and subjectivity and their primacy. Up to now the changing of mental processes through intentional human action has largely been a random process relying primarily on perception based understanding and awareness. This was principally acquired through, and was dependent upon, external laboratory tools and methods, topped off with some insights from both ad-hoc and designed introspective, metacognitive and other first-person experiences. Of course individuals have tried to change themselves through their awareness based understanding and thinking as well, but that has largely been an ad-hoc and peripheral phenomenon. What we are proposing is a more developed and mature form of this awareness based thinking and not what it has been so far.
If the control and management of the process of mental change comes into the hands of the new mental processes generated by a nonverbal need arising out of this new accumulating awareness, then the hitherto random process of changing mental processes will no longer remain dominant. It will become a part of the overall design of the new intelligent process of change.

Roots of ‘awareness’ based mental processes
The antecedents of the new type of developed awareness that we are proposing lie in the process of awareness that emerged after mature language and civilization. This happened when a bifurcation emerged in the thinking process, which had operated predominantly in relation to external phenomena and objects. This new strand of thinking was about processes like feelings, ideas, etc., which had happened inside one’s head. In the case of the earlier strand, the problem was centered outside of oneself and defined by the outside thing or phenomenon, but in this form of thinking the problem was centered and defined within the mind. The reason why we are tying this new thinking to mature language in the period of civilization is that a recording of the experience of what one has been thinking and feeling is only possible with words. Without words one cannot record that experience. We and other living things can experience the outside world and its innumerable phenomena without words, but they cannot be recalled and then developed without words. This new branch of thinking is actually extra-practical or non-practical thinking, disconnected from our practical agenda, which is largely tied to the outside world. It takes into account those phenomena and issues which do not have any practical bearing on an individual’s practical agenda. Logically the causation and origins of this thinking can only be in an individual’s emerging awareness. The desire to know more than one’s practical needs can only have its origins in an awareness of the way one’s mind is working (both in thinking and feelings) in relation to the rest of one’s life. The questions which start the process of speculative thinking, as opposed to agenda oriented thinking, have to have their starting point in one’s awareness of one’s ideas and feelings in general and not some practical need. Because the inquiry which arises out of practical need is limited and circumscribed by the need, while speculative inquiry has no such limitations; it can arise out of both thinking and feelings, i.e. there can be intellectual and emotional inquiries. Thinking arising out of a developed awareness has been growing over the last five thousand years or so. We can see it especially in the works of early philosophers, poets, thinkers, etc. It drives the entire lives of individuals who occupy themselves with it. Of course the majority of the human race have occupied themselves with practical agenda oriented thinking and not awareness based thinking; hence it has been a marginal phenomenon in human history. But despite being marginal it has been strong and has consistently grown and developed with time. Consequently, one is justified in recognizing it as a human mental faculty and function distinct and separate from ordinary agenda oriented thinking. We are proposing that this new awareness, which started off by being one application and function of our mental processes, has after developing and accumulating over centuries reached a stage where it can be used as a tool in relation to problems that arise within ourselves.
It can be used as a tool to think about specialized micro and macro problems of understanding, change, correction, modification, etc., pertaining to our mental processes as a whole. We can move from awareness to designer thinking when we confront any specific issue in relation to any mental state, function or aspect of some mental process. This means that instead of accidentally discovering and then dealing with it, we can proceed through a deliberate, regular process of questioning and reasoning in order to understand and change it. This is how that awareness becomes a tool and another form of intelligence. The current information and knowledge that we are getting about the mind from disciplines like Neurophysiology, Neurology, Cognitive Science, Neuroscience, Psychology, etc., is one level of intelligence. There is a flood of information about mental functions, states, consciousness, brain functions and anatomy and their correlates, but not about the functioning of the brain as a complex living mental system. We do not at present have an integrated deeper knowledge of the mechanics of its formation, its energy constituents and functioning, its evolution, its core design (hierarchy, interconnections and interactions) and how it can be intelligently modified and engineered through mental tools and processes. Viewing the mind in this manner and for this purpose is way beyond the jurisdiction of contemporary science and philosophy, which are focused mainly on creating a plausible explanatory science of consciousness and mind. Today we can scan and make all kinds of photographs of the brain and its various components and parts, but not of the living system of mental functions. There is no hologram, echogram, CT scan or MRI of the mental system produced by the brain. And this mental system includes emotional and feeling processes. Contemporary scientific investigations of the brain and mind are not yet within miles of this level of observation and knowing of mental functions as a living system. All attempts so far at becoming intelligent about our mental system fall short of this goal, whether it is the laboratory sciences or Psychology, Psychiatry, communications or even any mystical practices and traditions. In this situation an indirect but comprehensive intelligence about the mental system has to be developed. And that cannot happen just by talking with others about oneself – some of our motives, feelings, emotions, ideas, habits, etc. – because we know that such discussion only illuminates certain limited areas, subject to many conditions and reservations, has little effect, and is a far cry from being intelligent about mental functions as a distinct system. If such an intelligence of mental processes is possible then there must be evidence of the role of this intelligence qua the mental system – that is, evidence of how this intelligence arising out of accumulated awareness and knowledge actually works with respect to mental processes. Does some difference occur in mental processes from its role? A difference or change which cannot be explained away in terms of what exists in current human experience, i.e. the changes resulting from some emotional shock or trauma (both good and bad). Such changes cannot be ascribed to the role of intelligence, as they are unintelligent natural responses. This special intelligence arises out of, or is the fruit of, this body of thought and understanding about mental processes.
And when it is applied to mental functions as a system, it gives rise to a new enabling experience of consciousness.

Rethinking and Redefining the term ‘Consciousness’
There is a need to make a clear distinction between the existing meaning of the term ‘consciousness’ as it is used generally and in scientific inquiry, and the new specific meaning that we are ascribing to it. Broadly speaking, ‘Consciousness’ is presently viewed as an awareness of oneself in terms of experiencing oneself, which is also present in animals. The beginnings of simple awareness (of the form, its needs and outside reality) in living forms can be found even in the single-celled paramecium. This awareness includes experience, perception, and a simple programmed mental response. It is essentially an internal subjective process which is based on the individual specimen’s experience of itself and its biological needs through its emotive process, where its pain-pleasure programme resides. In more developed pre-verbal life forms, including pre-verbal human beings, the process of intelligence or problem solving emerges as distinct from and an addition to the programmed response. After the advent of language, we find thinking and a more developed intelligence or ‘consciousness’ coming into the picture. We can better identify the post-verbal form of consciousness with reference to unconsciousness, which implies an involuntary and unevaluated programmed response. So it would generally mean not only a response to perception but a degree of intelligent understanding of what has been perceived, be it a mental process or something outside of it. It therefore denotes not more thinking but thinking based upon deeper understanding of any particular matter or phenomenon. In our time we would like to ascribe another specific meaning to this term so as to separate it from the normal process of thinking. The normal thinking process is a cogitative process which is mainly undertaken in relation to tangible external phenomena. The ‘Consciousness’ we are referring to involves first of all systematic thinking about mental processes and then the end product or conclusion of that thinking. It is the ‘understanding’ or ‘knowing’ that arises out of a systematic and on-going process of holistic and deeper thinking about mental processes, and not assumed knowing arrived at through means like yoga, meditation, psychological exercises or some pseudo-religious methods. So it potentially represents a highly organized contemporary intellectual process applied to a crucially important area of inquiry, which is not as yet fully in the grasp of Science. ‘Consciousness’ is a new train of thought arising out of our observation and awareness about the whole of our mental existence and its interactions and interconnections, including a focus on what we would like to do, and what can be done, about what we have identified and become aware of. Thus it is actually a next stage or new level of combination of our perceptual, emotive, intellectual and other mental processes (like memory, imagination, etc.), which includes planning, projecting, etc. in relation to the modification and engineering of our mental processes. If we do not assign this new meaning to the term ‘Consciousness’ then we will fall back on its existing connotations and usage in the Sciences (theoretical and laboratory) and Social Sciences, including contemporary Philosophy.
In all these areas of inquiry there is a focus on micro aspects and piecemeal examination of those parts of the mind which can then be defined and assigned to separate sub-areas, i.e. Neurobiology, Psychology, etc. In Consciousness Studies the focus is on finding out what is ‘Consciousness’ (subjective awareness) and not on mental processes as a whole dynamic system. And again within that focus there are further subdivisions of zeroing in on anthropological consciousness, genetic consciousness or social consciousness, etc. The existing focus on ‘consciousness’ and its various forms has become so elaborate and sophisticated today that it is inhibiting a more systematic and holistic inquiry into the mental processes for the purpose of intelligently knowing and changing them. So we need to clarify and demarcate and then assign the above technical meaning to ‘Consciousness’. From the above, it will be clear to the reader that this new process of awareness and consciousness is not something we are born with. It is like a specialized body of knowledge which can only be acquired later on in life as one grows and is able to see new dimensions of mental capabilities and functions. So this new consciousness will be one such specialized mental faculty with its own specific mental processes, tasks, applications and experience. Human beings, while acquiring specialized knowledge and its applications in any area of inquiry, discover and construct a whole spectrum of new mental and emotional functions and dimensions in addition to the specific mental and emotional programmes they are born with. New mental breakthroughs in specialized understanding of any phenomenon are accompanied by corresponding pleasure experiences which do not come from any pre-existing emotional programme but from new mental processes and dimensions which are a product of the process of specialization. Thus a similar process of specialization will take place when we start gaining knowledge about our mental processes and reach a developed stage of this process, where the new ‘awareness’ of mental processes emerges along with corresponding emotional pleasure processes which one was not born with. Then we have an emerging new mind. What that practically means is that many layers of this new awareness accumulate, as a result of which our consciousness is able to see not only our existing dimensions but also project their potential. When we acquire this capability of projecting and adding up our new and acquired mental functions, then at that point arises the question of whether a new mental phenomenon is emerging or not. If we find it emerging as a reality, then the issue arises of cutting its mental natal cord with the old. When our new ‘consciousness’ can see and identify the new emerging mind, not as a tool-kit for biology but as a phenomenon in itself, then the cutting of the mental natal cord becomes possible and necessary. It has to be a positive discrete step, as it is in the case of a child, arising out of the awareness that there is another phenomenon which will grow to be a counterpoint and then a successor. Hence it now needs a disconnection from the old and its existing source of nourishment, and has to be nourished and developed in a different way and with new sources.
The above will be the character and direction of the new unfolding awareness and consciousness of the mind, which will eventually result in the new mind becoming a concrete crystallized reality and an autonomous identity with its own dynamic and far greater capabilities of intervening and changing not only its own self but the outside world.

What is ‘mental’ cognition or ‘cognizing’ and ‘observing’ the mental complex; distinction between ‘normal’ cognition and ‘mental’ cognition
It is only at the present juncture of the civilizational process that ‘cognizing’ or ‘observing’ the mind can be proposed on an intelligent plane and an unprecedented footing, compared with how this process has been conceptualized and practiced hitherto in mystical/spiritual traditions or even in the modern intellectual methodology, which uses techniques like introspection, meta-cognition, etc. The existing concept and practice of self-consciousness or ‘know thyself’, wherein man starts becoming aware of his emotional and mental states apart from biological needs and starts claiming that he is observing his mind, is not the observation we are proposing. In fact, in our view, this concept of ‘observing’ the mind has a basic flaw. We cannot seriously or objectively observe until we develop a new mental capability which is sufficiently external to and apart from the mental and emotional processes which we want to observe. An observation process which is a part of the preexisting mental processes cannot produce the ‘observation’ and ‘consciousness’ we are proposing. That is not an intelligent process based on acquiring sustained, systematic, objective observations and then processing those observations to produce a holistic understanding which can be called ‘consciousness’ of mental processes. So we need a new process of observation producing a new form of ‘consciousness’ of the mind. We have today a sufficient knowledge and understanding base (scientific and otherwise) to undertake a deeper and serious process of cognition and observation, which would be the first real step toward a sustained and holistic engineering of our mental processes. And as mentioned earlier, this will be an exercise that will be aimed not only at restructuring the existing functioning of the mind but also at its core design/architecture and the logic of the existing connections and interactions of its fundamental processes and functions. In this regard, the first issue that we need to become aware of when we talk of ‘cognizing’ or ‘observing’ the mind is the concept of ‘mental cognition’ itself. In Scientific Literature it denotes a process of mental processing of sensory data after which some conclusions or, say, inferences are drawn within the mind about that data, which is primarily about the outside world. There is internal data in processes like meta-cognition but that is about our existing learning, memory or normal functioning of the cognitive process, when it processes sensory data coming from external stimuli. In reality therefore, this process of mental cognition is really inferential cognition and not primary cognition. By the latter we mean a direct cognitive process like that of sensory perception, which is able to directly receive information about the source. Thus if there is to be any ‘cognition’ of the mind itself then it will have to be such a direct primary cognition at the mental level with no intervention of sensory process based cognition.
Logically, some form of primary cognition – or, we could say, internal sensory perception – would already exist within us and even in animals. This form of a perceptual process would be built into the interconnections between the brain, mind and other physiological systems of the body. It is through such an internal perceptual process of sensing and reception that the mental and physiological systems would be interacting with each other in a coordinated manner. This system would be operating in terms of a composite of rarified massless energy forms, which constitute mental processes, and the heavier electrochemical energy processes, in which the brain and body operate. There would be interface and transduction between the various levels of energy processes for classification, relevant processing and then corresponding responses. But this entire process would be taking place at the unconscious level and with the sole purpose of coordination between the different biological systems and also with mental functions, for smooth and stable integrated functioning and survival of the biological form. We are proposing an addition to this process in the form of conscious internal perception, focusing on accumulating direct perceptions and cognitions of mental processes for the purpose of their intelligent restructuring and engineering. If we are to observe the mental processes more deeply and clearly in terms of their design, components, energy constituents, critical evolutionary interconnections and interactions (with the genetic and biological processes) and their logic, and the deeper (not superficial) layers of their functioning, then we need this new internal ‘perception’ and ‘cognition’ capability. This type of ‘cognition’ will be a new mental process and capability which we will have to learn to generate in order to supplement our existing cognitive capability, both external and internal. In order to generate this new internal cognitive capability and process we will need to disconnect from our existing plane of observation and cognition and come up with new mechanics apart from the mechanics of the existing cognitive process, which are actually biological because they are based primarily on external sensory data and process. This means they are in terms of the familiar and tangible electrochemical or other such heavier energy form based components, while the conscious mental cognition that we are proposing will be in terms of the energy constituents and components or building blocks of mental processes, i.e. some lighter pre-big bang massless magnetic energy forms. This is because, in our view, direct primary cognition of mental processes can only take place at the level at which they actually exist and operate. The logic of what we are saying can be discerned from the following statement of Brian Greene, where he says “As a general rule, the size of the probe particle that we use sets a lower limit to the length scale to which we are sensitive… Useful probe particles cannot be substantially larger than the physical features being examined; otherwise, they will be insensitive to the structures of interest”. Keeping the above in mind, let us first take a look at the basic logic of the existing external cognitive process. The energy sources that our senses perceive today, historically speaking, existed before our receptors came into being.
It was during the process of evolution that we succeeded in producing certain sensors or receptors which could operate within a certain range of the energy spectrum and receive the energy radiating or coming within that range, in a discrete manner. This means they became capable of classifying the incoming data. So living things then acquired the critical (to survival and further evolution) capability to classify the energy spectrum received. This is a very important point, because it is for this classification that precisely calibrated receptors were gradually evolved, which enabled living things to detect micro ranges of energy and also distinguish between one range and another. They then also developed the ability to relate the discrete parts of the received energy to its source, and that too not generally but precisely. So this must have been an internal process that would have emerged and then developed in living things as they moved on to become more complex. In addition, it is through this process that specialized sensory organs would have gradually evolved for perceiving a range of energy processes in the environment of a living thing. This is the process of external sensory perception. In human beings it has become quite complex and many-layered because of the exponential growth in the data it has to perceive, and has even been extended through scientific or laboratory tools. We are of the view that a similar process will be needed for systematically cognizing the human mental processes, and that will be this new process of ‘internal sensory perception’ or direct mental cognition. Similar to external sensory perception, this process will also have receptors (a sort of sensing mechanism of weak massless quantum energy formations) sensitive to mental energy constituents, which would detect, record, resolve and classify (not broadly but at the most micro level, which means as micro as the source can produce) the different ranges of mental energy signals that arise at source, from the various mental processes themselves, and then relate each discrete part of the received energy to its source in specific terms. Its resolution in micro detail will depend on how resolved the mental energy radiation or signal is at source and how specific its components are. Ideally the receptor should be able to resolve it at the same micro level at which it originated; logically it cannot do more than that. And this ability to resolve would increase in proportion to the information needed from the spectrum of mental energy signals; the more information is needed, the greater the ability to resolve it into micro ranges will have to be. It is only after separating the different ranges from the lump of energy received, or cognizing its components discretely, that it will then proceed to classify them in different categories and then proceed to interpret them, so as to faithfully transmit the data about the state of the source to the next stages of mental processing of that cognized data. Since this entire process will not involve any biological sensory process (which is actually external to the mind itself), it will be largely non-conflicting with the existing heavier energy substrate of biological processes within us; it will not disturb the atomic and molecular processes or our existing biological/physiological functioning (including the existing brain processes involved in generating mental processes).
It will operate primarily at the nonverbal level and largely outside the confines of our existing verbal and ordinary conscious (in the sense of simple awareness) processes, but verbal inputs will be integrated into it just as they are in external cognition. Most importantly, it will gradually need to become a specialized mental organ, like the various biological sensory organs of perception. Accumulated cognitions of the mind in its various layers (surface to deeper) and interconnections will, over time, result in the formation of a specialized mental process of direct mental cognition, whose data will then feed into the next stages of processing: producing conclusions and generating the will to implement those conclusions. In time, the data-processing and will processes relating to these accumulated cognitions, in all their layers, interconnections and dimensions, will themselves become specialized and elaborate. Only then will the individual acquire the capability of engineering, and the process of mental restructuring will commence as a sustained and deeper process entailing the modification, pruning and eradication of existing mental patterns and processes and the generation of new mental capabilities and processes. This post-cognition work will be undertaken by our intellect in consonance with our developed sensitivity process, in other words by an intelligent composite of our verbal and nonverbal mental processes. Both are perfectly capable of undertaking this internal perception of the various mental processes, just as they have been carrying out perception of external phenomena (both tangible and, like strings or black holes, intangible to existing tools). In fact, at an unintelligent level they already observe and interact with the various mental processes and modify pre-existing mental states and functions. Only now they will be intelligently developing this new process and mechanics of cognizing the mind in all its layers as an objective process, and then making serious interventions in its design and functioning so as to restructure existing mental products and make new ones.

An obstacle in direct 'mental cognition'

Once we begin observing the mind through direct mental cognition we will confront an obstacle. It concerns how our mind functions when it begins to acquire observations about a phenomenon. Alongside the process of acquiring observations about a phenomenon as it is, one keeps interpreting that information, but a stage comes when one needs more observations and is unable to get them, because the obvious resources of observation and cognition have been exhausted. What one is then liable to do is substitute thinking for observation: one starts drawing inferences from the acquired direct observations and then treats those inferences on the same level as observations.
We then start engaging in pseudo-reasoning and cheap inferences based on our existing mental software and programmes, instead of strict reasoning in which one can intelligently distinguish inferences from direct observations, refuse to substitute the latter with the former, and persist with acquiring more direct observations, and then, through strict reasoning, draw strict inferences on the basis of those observations, inferences that are tentative and subject to verification during their application. Thus, while observing and cognizing the mind, we need to be aware of this hazard in our existing method of reason-based inquiry and investigation of phenomena.

i. The normal form of 'awareness' that we experience is of the tangible reality of our biological existence and its interactions, which we call 'perception', and we then have thinking based on this perception.

ii. We have two types of experience of reality: one through awareness-based mental processes and the other through perception-based mental processes. We need to grasp and appreciate the distinction between them.

iii. Up to now, whenever questions and problems arose about one's own thinking, whether in conjunction with outside phenomena like stars, the heavens, time or eternity, or with something purely internal, thinking remained a function or process, because one was formulating problems about one's internal mental processes, such as feelings and ideas, collaterally with things one observed in the outside world.

iv. If we have an ongoing capability which is going to operate and unfold from time to time in different ways, then we also acquire the capability of projecting it.

vi. Just an aside here. Cell biologist Bruce Lipton thinks that human cell membranes might have receptors which could detect the energy signals of thought in addition to the other known signals they receive. In our view, the process of mental cognition that we are proposing logically cannot take place through biological mechanics. The physical mechanics of the making of mental processes and consciousness could use such receptors in the cell membrane, just as Penrose and Hameroff propose microtubules do, to harness weak particle/quantum processes and use them as building blocks of mental processes. This could be the physical mechanics of the making of mental processes and consciousness, but the mental cognition of mental processes that we are proposing will need some other mechanics; these will have to be mental, not biological.
He respectfully proposed a new government to Parliament in a speech. He helped give the military confidence so they could go into war with their heads held high. In the end, the British won the war. Winston Churchill is remembered for being the backbone of the military and giving his time for the

An Analysis of Churchill's "Their Finest Hour" Speech

Sir Winston Churchill was born to an aristocratic family in 1874, and he became the prime minister of the United Kingdom. Before becoming prime minister, Churchill had a long career. Churchill was a devoted citizen who loved and valued his country, and his entire previous career paved the way for him to take over the position of prime minister on May 10, 1940. The time at which Winston Churchill delivered his "Their Finest Hour" speech was a time when Europe had witnessed the defeat of the French by the Germans. Churchill's speech aimed at giving hope to the people and motivating them to keep fighting against the German army. He made a big change in the world with his voice alone. His words made people stand up and fight against their enemies and saved his country during the Second World War. Winston Churchill was born into an aristocratic family in the United Kingdom. He went to military school when he was young, but later stepped into politics. He had many failures and difficulties, but he was eventually appointed First Lord of the Admiralty, and in 1940 he became prime minister of the UK.

HISTORY ASSESSMENT TASK 1 - INVESTIGATING THE PAST. CHOSEN INDIVIDUAL: Alexander Hamilton

Alexander Hamilton was an incredibly intelligent and significant individual in America's, and the world's, history. Hamilton was one of America's Founding Fathers and features on the American ten-dollar note. Alexander Hamilton became a Lieutenant Colonel and George Washington's aide-de-camp in the Revolutionary War and helped lead America to victory. George Washington was impressed by Hamilton's intelligence and courage, so he promoted him to be his assistant during the Revolutionary War, which started on 19 April 1775.

Rockefeller: The Captain of Industry who helped our country thrive

"The best philanthropy," he wrote, "is constantly in search of finalities - a search for a cause, an attempt to cure evils at their source." - John D. Rockefeller. John D. Rockefeller was the richest man of his time, but he used his wealth to improve our country. Rockefeller entered the fledgling oil industry in 1863 by investing in a refinery in Cleveland, Ohio. In 1870 Rockefeller established the Standard Oil Company, and with its establishment he controlled 90% of the oil business in America by 1880. William Jennings Bryan was a man who strongly believed in his faith and made sure to use it throughout his life and legacy. "Only Theodore Roosevelt and Woodrow Wilson had a greater impact on politics and political culture during the era of reform that began in the mid-1890s and lasted until the early 1920s." This thesis opens the book and introduces William Jennings Bryan's legacy in the United States. Even though he ran for the presidency three times, he still gained popularity across the country. He was famous for his radical ideas and his eloquent speeches. Graduating with a degree in economics, Bill felt the patriotic surge to fight for his country, which was sailing towards the path of war.
Woodrow Wilson, the twenty-eighth President of the United States, is renowned for many accomplishments; of particular importance, he is credited as the father of public administration. Although he argued for many different ideas and concepts, his end goal was always the benefit of the people. This particularly resonates in his 1887 essay, "The Study of Administration," in which he details his concepts of and for public administration. Roosevelt would go on to become our nation's 26th President, as well as our nation's first conservationist president. Although he was a sportsman and hunter for most of his life, he deeply mourned the loss of animal species and natural habitat, a feeling which would eventually lead him to become a co-architect of the American Antiquities Act of 1906. The American Antiquities Act of 1906 was an Act written for the preservation of American "antiquities," passed by the U.S. Congress and signed by President Theodore Roosevelt on June 8, 1906. It gave the president power to protect our cultural heritage. Winston Churchill should get more praise for what he did, because he was an outstanding politician, wrote incredible speeches, became prime minister of Britain and won World War II. To start off with, Churchill was a very political man, and many of his successes in life came from being part of British politics. Many people thought that once Churchill switched his allegiance from Conservative to Liberal, he was being disloyal and opportunistic. Churchill's role in the political community was one of the many reasons he made an impact on our world today. Winston Churchill was known for a few major changes during his time. But Washington made the ideas of the American founding real. He incarnated liberal and republican ideas in his own person, and he gave them effect through the Revolution, the Constitution, his successful presidency, and his departure from office. What's so great about leaving office? Surely it matters more what a president does in office. Revere died on May 10, 1818, at the age of 83, at his home on Charter Street in Boston. He is buried in the Granary Burying Ground on Tremont Street. Paul Revere was a great example of an ordinary man who became politically involved and symbolic of the American Revolution. As someone who risked everything to make

As I reflect on the past presidents of the United States, I realize that there have been many triumphs, as well as many trials. These successes and failures have influenced the nation to be the way it is today. President Taft and President Wilson had many accomplishments and failures that I have recently learned about, which caused me to reflect on the history of the United States presidents. Through their accomplishments, as well as their failures, there is much to be learned and remembered. Before Herbert Hoover served as America's 31st president from 1929 to 1933, he achieved global success as a mining engineer and earned worldwide gratitude as "The Great Humanitarian" who fed war-torn Europe during and after World War I. President Hoover brought to the presidency an outstanding reputation for public service as an engineer, administrator, and humanitarian. When the Republican convention in Kansas City began in the summer of 1928, the fifty-three-year-old Herbert Hoover was on the borderline of winning his party's nomination for president.
He had won many primaries in California, Oregon, New Jersey, Massachusetts, Michigan, and Maryland. Among important Republican supporters he had the help of women, progressives, internationalists, the new business elites, and corporate interests. Party regulars grudgingly supported Hoover, but they never trusted him. The convention nominated Hoover on the first ballot, teaming him with Senate Leader Charles Curtis of Kansas. Through his pertinacious proficiency, Wilson was able to manage his administration in an impeccable manner and in turn gained near sovereignty over the legislative and judicial branches. In addition to his exceptional use of the American system, Wilson, with his prominent executive authority, sought to pass his "New Freedom" through legislation. "Wilson was responsible for the longest list of reforms ever seen in the U.S. until Franklin Roosevelt's New Deal a generation later. His entire [the New Freedom] reform package, including tariff, banking, labor and tax-related issues, passed in Congress by
Words by Helena Klintström “When governments say they are prioritizing sustainability and equality, maybe we should ask to see the stubs in their chequebook. We could ask, how big are your schoolyards and playgrounds? How easy is it for people to find green space or outdoor recreation?” Urban development is the physical manifestation of our values as a society — it’s the tangible, undeniable evidence of whether or not we mean what we say. Look around and ask yourself: who is this place built for, and what needs does it cater to? The urban form reveals where our true priorities lie, in terms of housing, transportation or public space. We can see how much a society values older people, for example, by how easy it is to get around using a walking stick or how much seating or shelter is provided. Today, many governments, municipalities and organizations have set ambitious targets connected to the UN Sustainable Development Goals (SDGs), recognizing that we urgently need to make cities more sustainable and equal. So, to hold them accountable, how about we use health equality in the built environment as an indicator? Health equality is a very powerful litmus test for sustainability, not only because it is measurable, but also because when you create the conditions for wellbeing, you create places that perform well on many other levels. If people can meet their day-to-day needs by walking or cycling, and it’s safe for everyone to do so, this not only promotes active lifestyles and social interaction, but reduces carbon emissions and air pollution from cars. Green space in a city supports physical and mental health, but also climate change adaptation by offering protection from flooding and extreme temperatures. What is good for health equality is good for pretty much everything else. Looking at urban development over recent decades, it’s clear that reducing inequality in health has not been a top priority. The gap in health outcomes and life expectancy between different groups is widening, and both are strongly linked to geography. In Sweden, where I live, reducing inequality is the one area of the SDGs where we are failing to make progress — and we are actually moving backwards. Rising inequality in health means that if you live in one part of Stockholm, you can expect to live several years longer than if you live in another part of the city. It’s a similar picture in many other major cities around the world. Urban planning has been guilty of building large swaths of suburban homes and massive high-rise housing estates that concentrate socioeconomic groups in particular areas. It has ploughed highways through residential communities, creating noise and pollution and preventing people from accessing amenities in nearby neighbourhoods. This makes it harder to deliver a full range of services and to create well-functioning transportation networks, and it contributes to loneliness and poor social cohesion. But as urban planners, we also have the ability to resolve the physical issues that contribute to health inequality, not only the distribution of green space, but also healthcare services and pharmacies. We can see how access to healthcare — how long it takes to get to the hospital, how easy it is to pick up your medication — affects outcomes, and Covid has only emphasized how important this is. 
More equal cities will also be more resilient to future pandemics: researchers in Philadelphia found that the location of vaccination centres and the availability of transportation were significant barriers to vaccine take-up, which disproportionately affected Black and Latino communities. Some of the things that we can do to tackle health inequality are self-evident: it's easy to see that walkable neighbourhoods, green space and active transportation support healthier lifestyles. Others are less obvious, such as not creating large monocultures of housing types or building physical barriers that segregate communities. None of these ideas is really new: aren't these the same things we talk about when we talk about quality of life, or climate neutrality, or child and age-friendly cities? Over the next decade, we need to make rapid progress on sustainability, on health and on equality. So when governments say they are prioritizing these things, maybe we should ask to see the stubs in their chequebook. We could ask, how big are your schoolyards and playgrounds? How easy is it for people to find green space or outdoor recreation? How many people are living in overcrowded conditions or reliant on emissions-generating transport to get to where they need to be? This is not about planning cities or doing business in a completely different way; it's about tweaking current models and integrating a new mindset. There is a growing wave of good examples around the world, from highways retrofitted as parks, to social value used as a selection criterion in public procurement, to collective housing models that bring together older people and kindergartens or students. We need to use these to demonstrate what we can do to help, simply by thinking about health a little differently. Health equality is not yet top of the agenda for every society, but it is only a matter of time before it will have to be. It will become impossible to create thriving economies or businesses without it, and in the years to come, the winners will be the governments and corporations who succeed in delivering on this priority first.
Please Bring Back the Puddings!

I recently ran across online portions of an interesting book, edited by Harlan Walker, titled Disappearing Foods: Studies in Foods and Dishes at Risk (Prospect Books, 1995). The book includes an article written by Mary Wallace Kelsey called "The Pudding Club and Traditional British Puddings." It celebrates a resurgence of the quintessential British boiled pudding.

Where did we go wrong?

Ms. Kelsey's article prompted me to ask a couple of questions: where did we Americans go astray in our understanding of what a pudding is? Pose the question, "what is pudding?" to any American you know, and you're likely to get a raised eyebrow and a sideways glance as if you're from a different planet. Anticipated answers will likely include the words chocolate or vanilla, or maybe lemon, pistachio, or butterscotch. You'll likely be told that it can be found in your local grocery next to the gelatin desserts (usually going by the same brand name). And someone may even tell you that it's a dessert commonly served at hospitals and all-you-can-eat dinner buffets. So my curiosity got the best of me and I started to research the topic. I wanted to know if there was some remote historic connection between the virtually extinct boiled pudding and the plastic cups of pre-made stuff Bill Cosby used to hustle to our children. I've concluded there is a connection. Maybe we weren't wrong after all.

A Brief Pudding History

If one looks at the old recipes for pudding, it rapidly becomes obvious (and many historians and etymologists agree) that the meaning of the term is difficult to pin down. The word appears to find its origin in an old French term describing a blood-sausage stuffed into animal intestines and stomachs (and…um…other…parts) that the Normans brought with them as they invaded the British Isles in the 12th century. Modern direct descendants of these original puddings are the black and white puddings of the United Kingdom and Ireland — boiled, sliced, and often fried up for breakfast. Puddings really exploded onto the culinary scene around the 14th century when someone discovered that a piece of cloth was a viable substitute for natural casings. Woohoo! No longer did diners have to wait for the next autumn slaughter! Puddings could be made year-round! The pudding bag was here to stay! …At least for the lion's share of the next five or six centuries. Puddings were often boiled alongside the meat. They were likewise often served prior to or along with the meat course so that less meat would be required to satisfy hungry appetites. But as sugar became more widely available, it began to alter the palates of English societies. Even savory dishes, including puddings, were often seasoned with sugar. Eventually, the definition of pudding began to apply to a broader collection of foods that weighed heavily on the dessert end of the table.

A White Pot with a topping of sugar being browned with a period salamander

There were dozens, if not hundreds, of different kinds of puddings: boiled puddings, dripping puddings (e.g., Yorkshire), plumb, marrow, and pastry puddings. There were regional and local puddings. There were bread puddings that used bread crumbs and bread-and-butter puddings that actually used slices of bread (e.g., a white pot). There were apple puddings that we would now call apple dumplings. There were also quaking and custard puddings (e.g., "Flummery"), made primarily of egg and milk with only a fraction of the flour seemingly necessary to hold it together.
Another pudding type was called Blancmange. Different from custard (which is thickened with egg), blancmange is a dairy dessert thickened originally with either isinglass or calve’s feet jelly, and by the turn of the 20th century, with corn starch. There were different kinds of manges, depending on how they were flavored and/or colored. Blancmange colored with cochineal, for instance, was called rougemange, and that colored with spinach was verdemange. It was only a matter of time for some heroic cook to slip chocolate into the equation. An End to Finger Pointing Suddenly the debate over which dish has rightful claim to the name falls silent. (O.K., I haven’t heard anyone actually debate this besides myself in my own head.) It’s a fairly short journey through early 20th century cookbooks to link custards and blancmanges to Bill Cosby. A boiled plum pudding and a dish of instant chocolate pudding are actually both members of the same food-family tree. Think of them as distant cousins, having descended down different evolutionary branches of this broad food category called pudding. My assumption that we Americans had gone astray in our perceptions was incorrect. Both of these very divergent pudding styles seem to have legitimate claims to the throne. And as far as that goes, there are other modern foods that could chime in as well if they wanted to. Take, for instance, our Thanksgiving turkey stuffing and pumpkin pie. They both started as puddings. And the black sheep of the family — the Christmas fruitcake? You guessed it. So Where DID They Go? So going back to Kelsey’s article, my next question is, Why did boiled puddings disappear? Kelsey spoke of their disappearance from British tables, but they were once also very popular in America. Most English cookbooks used in early America were British (or heavily influenced by British cooking). So it’s no surprise to find a plethora of boiled pudding recipes even in those earliest “American” works published in Philadelphia and Boston. It’s interesting to realize, however, that even as American cookbooks began to reflect a distinctively American cuisine through the 19th century, British pudding recipes continued to hold on. It was only in the 20th century that they were finally nudged out of print by various custard and mange-type recipes going by the same name. I managed to find a remnant bag pudding recipe from as late as 1937 in the Pennsylvania-Dutch Cook Book by J. George Frederick . Frederick reflects public sentiment by calling such dishes “poverty puddings, out of the thrifty colonial past.” I believe there are several reasons why boiled puddings disappeared off the American culinary landscape. It was a slow death that may have started even at the height of its popularity. First, American colonists relied heavily on corn, as most of the wheat crop grown in North America was exported to Britain. Maize was considered by the British as suitable fare for Yankees. Beyond that, it was animal fodder. (see more on this topic in an earlier video we produced on Early Corn Bread.) The exportation of much of the wheat crop would have naturally limited the primary ingredients for pudding: flour and bread. Some of the earliest distinctions in American cookbooks were made by the additions of Indian Pudding recipes made with corn flour instead of wheat flour. Another early contributor to the boiled pudding’s demise was likely the development of pearl ash, saleratus, and finally baking powder. 
These chemical leavening agents appear to have steered preferences away from heavier foods to lighter fare. Frederick mentioned this preference in the opening remarks of his chapter, "Dutch Puddings and Desserts." Another contributing factor was advances in kitchen technology. With the continued development of kitchen ovens, it became easier and more reliable (as well as more efficient) to bake than it was to boil. Consequently, as pudding recipes developed in the 19th century, more recipes called for the puddings to be baked or steamed with a water bath rather than boiled. Another likely factor was simply the amount of time required to make a boiled pudding. Full-sized bag puddings typically required boiling times between four and six hours. During that time, cooks had to keep a watchful eye on the pot to make sure it didn't boil dry, and when additional water needed to be added, it had to be boiling water so that the cooking time wasn't extended any longer than it already was. Frustration over lengthy cook times can be felt even in early 18th century cookbooks. The answer to this was hasty puddings. Hasty puddings were actually a category of puddings: any pudding that required less time to cook, for whatever reason, can be considered a hasty pudding. For American colonists, corn mush was a common form of hasty pudding. It didn't take long at all for the corn meal, boiled with disproportionate amounts of water, to thicken up. Other hasty puddings could be made from larger pudding recipes simply by divvying the dough or batter into smaller portions. For instance, the earliest known bag pudding (the "College" or "Cambridge" pudding) soon became the "New College" pudding. These recipes were in essence the same, but the mix of ingredients in the New College recipe was divided into smaller dumpling-size portions and either fried or boiled. The bag was dropped altogether. And finally, the last nail in the boiled pudding's coffin was likely changing public perceptions regarding a key ingredient in most puddings: suet. It fell out of general favor with a society that was becoming increasingly health conscious. Suet has since been relegated to bird food. Jennifer McLagan has much to say on this matter in her 2008 cookbook titled Fat: An Appreciation for a Misunderstood Ingredient. There were possibly other reasons for the boiled pudding's disappearance from American cuisine, e.g. regional and ethnic influences, but those I've mentioned are the most significant.

But Wait! May I Please Have Seconds?

Here is a recipe that might justify a unified grassroots effort to resurrect the boiled pudding from the culinary grave. It's called "Puddings in Haste," from Maria Rundell's 1814 cookbook "A New System of Domestic Cookery" (originally published in 1807). Be sure to watch the video below as Jon prepares this dish. Rundell conspicuously omits the measures of the ingredients. Comparing it to a number of other period recipes, here are our recommendations:
1 cup dried bread crumbs
1 cup grated suet*
1/2 cup raisins, chopped, or Zante currants**
grated zest of 1/2 to 1 whole lemon
1/2 teaspoon dried ginger powder (double that if you're using fresh ginger)
2 eggs plus 2 egg yolks
a little flour for dredging
Bring a good-size pot of water to a rolling boil. In a large mixing bowl, mix the first four ingredients together until they are well incorporated. *DO NOT use hard muscle fat for this! Use true suet (kidney fat). Be sure to read my earlier post on Suet for more information.
If you can’t find true suet, you’re better off using very cold diced butter or frozen vegetable shortening than you are using hard muscle fat. If you opt for either of these substitutions, you’ll have to work fast. ** If you don’t like raisins, try some other dried fruit, chopped fairly small. Whisk together the eggs along with the ginger. Mix the dry ingredients and wet ingredients together. Divide the stiff dough into equal portions, and form into balls or dumplings about the size of a small chicken egg. Roll each dumpling in flour and lower them into your boiling water. Boil them for 15 minutes, stirring them on occasion to prevent them from sticking. After 15 minutes, these little puddings will look soggy and somewhat gray. They can be eaten right away, or you can allow them to sit for a little while and they stiffen up and improve in appearance. These puddings can be served hot or cold. Finish them up with a sprinkle of sugar, a little honey, maple syrup, or a delicious “pudding sauce” made of equal parts melted butter, sugar, and sack (sherry wine). There are a number of 18th century recipes that I consider really good…for 18th century food, that is. THIS dish, however, will likely be served at the next party I attend!
How Non-Profit Hospitals Are Driving Up The Cost Of Health Care

Last year, when New York Governor Andrew Cuomo was battling to win the Democratic primary, his campaign solicited a donation from the Greater New York Hospital Association, according to a recent report from The New York Times. The hospital lobbying group gave over $1 million to the New York State Democratic Party. And not long after, according to the Times, "the state quietly authorized an across-the-board increase in Medicaid reimbursement rates." The increase is expected to cost taxpayers around $140 million a year. The hospital lobby is a juggernaut in New York, as it is in other states. Over the last year, hospital lobbyists have fought reforms in Ohio, in Illinois, and in North Carolina. Last month, the Kentucky Hospital Association was shown to be urging members to donate to gubernatorial candidates to "assure access." In Washington, D.C., the hospital lobby is also fighting efforts to end surprise billing, which is when Americans go to in-network providers but then — surprise! — end up getting billed for more expensive, out-of-network services. Many say they oppose the practice, and leaders from both political parties have been working to end it. Yet hospital lobbyists are resisting. Which is weird, because most hospitals are nonprofits.

More Than Just Quid Pro Cuomo?

A study by Yale School of Public Health economist Zack Cooper and colleagues takes a look at hospital politics and helps shed light on why American health care is so insanely expensive. In 2003, President George W. Bush began fighting for a major expansion of the Medicare program. The Bush Administration knew it would be a hard sell, alienating small-government Republicans and putting Democrats in the awkward position of supporting Bush's agenda before an election year. Cooper says their study was inspired by one of his grad students, who served as a congressional aide when this legislation was being passed. "And the rumor was the U.S. Health Secretary, Tommy Thompson, was on the floor of the House with a notebook, writing down members of Congress who voted for the bill," Cooper says. Thompson allegedly did this to sweeten the deal for lawmakers on the fence, offering to reward supporters by "bumping up payment rates to hospitals in their districts" through a special provision, Section 508. Cooper and his colleagues have spent years investigating whether this was true, filing Freedom of Information Act requests and crunching data. They have uncovered evidence that suggests it was. They find that legislators who were on the fence and voted "yea" for the legislation were 700% more likely to see a large bump in Medicare payment rates to hospitals in their district. Between 2005 and 2010, Congress shelled out over $2 billion to 88 hospitals through the horse-trading Section 508 provision. It was a clear win for these hospitals, which spent the money on more equipment, buildings, services, and staff. Dropping opposition to the Medicare expansion also ended up being a political win for lawmakers on the fence. Not only did the special provision funnel extra federal funds to their districts and create jobs; the lawmakers ended up seeing a 65% increase in contributions from people who worked in their state's health care industry and a 25% increase in overall campaign contributions. "It's suggestive to me that this was in a sense a quid pro quo," Cooper says, adding that their analysis shows how health care spending becomes a "piggy bank" for political influence.
Giving New Meaning To The Term "Nonprofit" "Hospitals are the largest individual contributor to health care costs in the U.S," Cooper says. Americans spend a year at hospitals. That's about of national health spending, which now consumes of U.S. GDP. Cooper's research shows that, after a long period of consolidation, the cost of hospital services has been exploding. Between 2007 and 2014, hospital prices . The irony is most hospitals are "nonprofit," a status that makes them tax exempt. Many (but not all) do enough charity work to justify tax benefits, yet it's clear nonprofit hospitals are very profitable. They funnel much of the profits into cushy salaries, shiny equipment, new buildings, and, of course, lobbying. In 2018, hospitals and nursing homes spent over $100 million on lobbying activities. And they spent about $30 million on campaign contributions. Health industries have also been funneling hefty sums into dark money groups. But their political power isn't just the result of lobbying or electioneering. Hospitals are often the biggest employers in states and cities across America. Health care reformers direct much of their ire at the nation's health insurance companies. Perhaps they're the easiest targets because they're faceless paper-pushers, located outside their districts or states, who are often the only entity in the system controlling costs. Studies suggest insurance administration and profits do contribute to wasteful health care spending, but they're just one contributor to a bloated system. Hospitals, which often escape criticism, are a significant part of the problem. We reached out to the Greater New York Hospital Association to get a response to criticism of the appearance of a quid pro quo between them and the Governor of New York. They stressed that state hospitals "hadn't received a Medicaid rate increase in 10 years" and that while they "aggressively lobbied" to change this, they deny the interpretation that suggests their large donations were motivated by increasing Medicaid rates. They say, instead, the donations were aimed at defending the Affordable Care Act from "relentless attacks" from lawmakers in the nation's capital.
Most handplane geeks know that across the Pacific Ocean there is an entire culture of people who are even more obsessed with the mechanics of cutting wood with a plane than we are. I’m speaking, of course, about the Japanese, who are prone to holding handplaning contests where participants compete to see who can make the longest and thinnest full-width shaving. They measure the thickness of these champion shavings in microns. And the results are often affected by the weather. A wet day will swell the shavings by a few microns. Sadly, Western woodworkers have become obsessed by creating ultra-thin shavings, which requires planes to be tuned to a very high note. What’s wrong with this philosophy is that it focuses on the garbage instead of the good stuff. The shavings get thrown away, remember? It’s the resulting work surface that we keep – unless we handplane that all away in some handplaning bliss-fest. You want to be able to take the thickest shaving you can without tear-out, chatter or requiring you to bulk up like Thundarr the Barbarian. A thick shaving will get you done with fewer passes of the smoothing plane over your workpiece. Not only does this get the job done faster, but it also helps increase your accuracy. Huh? Think about it. If you make 20 passes over a board with a smoothing plane, you are much more likely to plane that sucker out of true than if you used only four passes. So how thick should your shaving be? Good question. Most people talk about getting shavings that are less than 2 thousandths of an inch thick. Or they talk about “sub-thou” shavings. Yes, it’s all very empirical, except for the fact that few woodworkers know how to really measure shaving thickness. Squeeze a dial caliper hard enough and you can make almost any shaving into a “sub-thou” shaving. Wood compresses. Metal bends. So I go for visual cues instead. If the wood is well-behaved, I go for an opaque shaving – that is, as long as the curvature of the cutting edge of my iron is significant enough to keep the corners of my iron from digging into my work. I’ve included a photo above of what this shaving looks like. This shaving gets the work done fast. If the surface has been flattened by a jointer plane, a shaving like this will get the work done in one or two passes. If I get tear-out using a beefy shaving, I’ll retract the iron fully into the mouth of the handplane and extend it until the shaving looks like the photo above. Here you can see the shaving is thinner, but it is still intact except for one area. That split in the shaving is probably caused by a small defect in the iron. The edge is probably getting dull and is ready for a touch-up. This shaving will clean up my surfaces in three of four passes. It usually eliminates tear-out more than the shaving above. But sometimes I need to get a little nuttier. And that’s when I push my tool to get a shaving like the one above. This thing is about to fall apart. In fact, it sometimes will fall apart when you remove it from the mouth of the tool. Usually, this sort of shaving requires some persnickety set-up to achieve. I can’t get this shaving with an Anant, new Stanley or Groz plane. They are just too coarse to allow this type of shaving to pass. This is what you are paying your money for when you buy a premium tool. Premium tools will do this with little fettling. My vintage planes that I’ve fussed over will do this as well. A sharp iron always helps, as well. 
The downside to this shaving is that you will be making a lot of them to remove the tear-out on the board. About 10 cycles or more is typical for some small tear-out. It is a lot of work. Can you get nuttier? Sure. If all else fails, I can set my plane to remove something between a shaving and dust. These "shavings" don't really look like much. How do you get them? That's easy. When I get my thinnest smoothing plane shaving possible, I'll rub some paraffin on the sole of the tool. This actually reduces the depth of cut just enough to get the furry, dusty stuff. Beware: Taking a shaving that small will force you into a lot of work. Lots of passes. Lots of sharpening. But when you need it, you need it.
Governments should pull out all the stops to implement a green recovery, as this is the best way to bring the world close to the 2°C pathway, according to the UN's Emissions Gap Report 2020. The report urged countries to follow this path because, despite a dip in CO2 emissions as a result of the COVID-19 pandemic, the world is still hurtling towards a temperature rise of more than 3°C this century. A green pandemic recovery would cut around 25% off the greenhouse gas emissions projected in 2030, the report found. The levels of ambition agreed upon in the Paris Agreement will have to be tripled in order to stay on the 2°C pathway and increased five-fold if the 1.5°C target is to be achieved, the report stated.

RTI reveals: MoEF&CC asks PMO to rethink disengagement from WII

Right to Information (RTI) data revealed the Union Ministry of Environment, Forest and Climate Change (MoEF&CC) has told the Prime Minister's Office (PMO) to reconsider a proposal for the former to disengage from the Wildlife Institute of India (WII). The ministry also informed the PMO that it had agreed 'in principle' to disengage from the Indian Institute of Forest Management (IIFM). The comments were made in reference to a Department of Expenditure (DoE) report that recommended the MoEF&CC disengage from five autonomous institutions under it, including the WII and the IIFM.

India announces committee to implement Paris Agreement targets

The Indian government set up a panel that will look into the implementation of the country's Paris Agreement targets, including its Nationally Determined Contributions (NDCs). This will be a high-level inter-ministerial committee, according to a Gazette notification issued by the Ministry of Environment, Forest and Climate Change (MoEFCC).

Climate goals in jeopardy as fossil fuel production set to rise: Study

A new report found that major economies were planning and projecting an average annual increase of 2% in fossil fuel production. This is despite the COVID-19 lockdown and research indicating the world needs to cut production by 6% every year over the next decade in order to remain on the 1.5°C pathway, a special issue of the Production Gap Report stated. The report measured the gap between the Paris goals and countries' planned production of fossil fuels. According to the report, global coal, oil and gas production would have to decline by 11%, 4% and 3% annually in order to achieve the Paris goals. But while the pandemic has resulted in a short-term reduction this year, post-Covid stimulus measures will continue to widen the pre-Covid production gap, the study found.

Biden names BlackRock alum his top economic adviser

US President-elect Joe Biden named Brian Deese as his top economic adviser. Deese was a key negotiator of the Paris Agreement and helped the Obama administration bail out the automobile industry. Climate activists have expressed dismay over Deese, especially given his role as head of sustainable investment at BlackRock Inc, and don't think he is 'aggressive enough on climate change'.

New Zealand declares climate emergency, pledges to become carbon neutral

New Zealand Prime Minister Jacinda Ardern declared a climate emergency and pledged that the country's public sector would become carbon neutral by 2025. The move, however, was termed 'symbolic' by experts, who urged the government to do more to cut the country's emissions. New Zealand joins a growing list of countries, including the UK, Japan, Canada and France, that have declared a climate emergency.
EU plan to cut shipping emissions finds no takers in international market

The EU's plan to expand its carbon market to shipping was opposed by Japan, South Korea and international shipping groups. According to the countries, the proposal will increase trade tensions and also emissions, because ships would prefer taking longer routes to avoid stops in Europe, where they would have to pay for pollution permits to cover their emissions. Currently, only power plants, factories and airlines running European flights need to get these permits.

Australia abandons plan to use Kyoto credits to reach Paris emissions target

Australia won't go through with its plans to use Kyoto carryover credits to reach its emission reduction targets. The move to drop the controversial credits, expected to be announced at a summit on December 12, is likely to be welcomed by other countries and will put Australia in a stronger position ahead of the Glasgow summit next year. It is also a major about-face in Australia's climate change agenda, as the government has previously claimed it is 'entitled' to use the 'surplus' units it accumulated between 2008 and 2020 as part of its Kyoto Protocol targets.
A Sole Proprietorship is the most basic type of business organization in the Philippines. It can be established by just one person, referred to as a sole proprietor. In a sole proprietorship, the personal assets of the owner are held to answer for claims against the business, since the business is an extension of the owner; hence, the assets and liabilities of the business are also the assets and liabilities of the owner. A One Person Corporation, or OPC, is a special corporation with a single stockholder. The concept was introduced into the Philippine corporate setting by Republic Act No. 11232, otherwise known as the Revised Corporation Code of the Philippines. Only a natural person, trust or estate may register an OPC. The incorporator, however, shall always be a natural person of legal age; in the case of a trust or an estate, the incorporator can be the trustee, administrator or any other person exercising fiduciary duties. An OPC combines the best characteristics of a corporation and a sole proprietorship, i.e. limited liability and complete dominion. Unlike a sole proprietorship, an OPC has a personality distinct from that of the stockholder, and thus the stockholder's liability is limited to the amount of capital invested. A Partnership requires two or more people who agree to contribute assets with the intent of dividing profits among all parties involved. The partnership has a juridical personality separate and distinct from that of each of the partners. A Corporation is composed of individuals (a maximum of 15 incorporators) who act as a single entity to advance the interest of the corporation as a whole. Corporations formed or organized by operation of law have the right of succession and the powers, attributes, and properties expressly authorized by law or incidental to their existence. A corporation may be a stock or a nonstock corporation. Stock corporations are those which have capital stock divided into shares and are authorized to distribute to the holders of such shares dividends or allotments of the surplus profits on the basis of the shares held. All other corporations are nonstock corporations. A Cooperative is an association of persons with a common bond of interest who have voluntarily joined together to achieve a lawful common social or economic end, making equitable contributions to the capital required and accepting a fair share of the risks and benefits of the undertaking in accordance with universally accepted cooperative principles.
Zwitterionic Surfactant for EOR in Tight Carbonate Reservoir: Physico-Chemical Interaction and Microfluidic Study
Recently, during surfactant aided recovery processes for unconventional reservoirs, carboxybetaine based zwitterionic surfactants (CnDmCB) have attracted attention due to their good tolerance to high saline produced water, remarkable water solubility, and ultra-low interfacial tension (IFT) at...

This thesis presents the formulation of geo-mechanical design parameters using a combination of laboratory and in situ test results in the heavily overconsolidated cohesive till formation in the city of Edmonton. These data were then used in the design of the recently constructed North LRT twin...

The window manufacturing process involves a series of operations including cutting profiles, welding profiles, corner cleaning, installation of hardware and assembling the other components that complete a functional window (such as jamb extension, brickmould). The main bottleneck of the manufacturing...

What Makes a Project Safe? Identifying the Impacts Factors Have on the Safety Performance of a Construction Site through Use of Artificial Neural Networks
What makes a construction project safe? This question prompted this research project. The goal was to identify factors and quantify their impact on the safety performance of construction projects. The first step in achieving this goal was to research key performance indicators in the area of...

Abundant hydrocarbon resources in low-permeability formations are now accessible due to technological advances in multi-lateral horizontal drilling and multi-stage hydraulic fracturing operations. The recovery of hydrocarbons is enhanced by the creation of extensive fracture networks....

Wettability Alteration by Chemical Agents at Elevated Temperatures and Pressures to Improve Heavy-Oil Recovery
The thermal method is the primary method used to improve recovery from reservoirs with heavy or extra-heavy oil. However, the efficiency and economic costliness of the traditional thermal process are limited by the unfavorable interfacial properties of the heavy oil/water (steam)/rock system....

Ductile steel plate shear walls are an established lateral load resisting system. Past research indicates that cold-rolled infill panels less than 1 mm in thickness present one solution to an overstrength problem arising from selecting an infill panel thickness based on ease of welding and...

We Used to Drink Our Water: Understanding the causes and consequences of boil water advisories in rural drinking water wells
Access to safe, reliable drinking water in many First Nations in Canada lags significantly behind the access available in non-First Nations communities. This thesis explores the sources, pathways, and consequences of bacteriological contamination in drinking water wells in Samson Cree Nation. A...
IN 2017, Mark Zuckerberg gave the commencement address at Harvard University, his alma mater, which he famously quit in his sophomore year to create Facebook. Zuckerberg, the fifth richest person in the world, had previously contributed $100 million to corporate school reform in Newark, New Jersey, in a project launched in 2010 that particularly focused on replacing public schools with privately managed charters and weakening teaching as a secure profession. Zuckerberg’s privatization initiative in Newark was marked by its antipathy to democratic and community control over schools in a district suffering from historical disinvestment and the related ills of racialized class inequality. As Dale Russakoff details in The Prize, the restructuring of the Newark schools began with sham community information-gathering meetings that presented the public with a false sense of involvement. Secretly, Mark Zuckerberg, Newark mayor Cory Booker, and Governor Chris Christie had already planned the corporate school reform fate of Newark’s public schools. Zuckerberg was deeply involved with key figures and organizations across the political spectrum dedicated to the radical business-led transformation of public schools by privatizing ownership and control over schools, administration, teaching, and curriculum. Privatization in this context referred to redistributing control from the community, teachers, and teacher’s unions to superrich individuals and politicians. In his Harvard address in 2017, Zuckerberg discussed his second major foray into education, CZI, emphasizing how his “for-profit philanthropy” focused on online personalized learning that would create meaningfulness and purpose for students by providing them the “freedom to fail” as they become entrepreneurs. Zuckerberg acknowledged that his own financial successes were only possible due to the economic privilege he inherited that provided him financial support and security to take entrepreneurial risk. Zuckerberg spoke of a new social contract with a universal basic income and lifelong educational services to confront the coming joblessness and insecurity brought about by technological innovation, of which, he neglected to mention, he has been both contributor and beneficiary. Zuckerberg recognized in his speech the extent to which economic success of individuals depends on support and security, and he advocated expanding the safety net in the interest of fostering a culture of business entrepreneurship. Zuckerberg’s apparently liberal commitments to expanding the safety net are toward the end of a neoliberal vision of inclusion defined by markets and entrepreneurialism. Everyone should be supported to “fail up” aided by continuous education. And Zuckerberg’s companies, Facebook and CZI, will be there to supply the lifelong learning products. Yet Zuckerberg did not mention in his address that CZI, LLC was a for-profit education company. Instead, he described it as a charity. Zuckerberg and his wife, Priscilla Chan, had announced CZI on Facebook upon the birth of their daughter in 2015. At that time, they claimed that they were fulfilling Bill Gates’s “giving pledge,” which challenged billionaires to commit to give away the bulk of their fortunes. Zuckerberg and Chan alleged that they were pledging 99 percent of their Facebook stock, or $45 billion, to the new philanthropy. 
Yet, Zuckerberg and Chan gave their gift not to a philanthropic foundation but rather to their own LLC—an organization that shared staff, leadership, and a profit motive with Facebook. Whereas Facebook is a publicly traded corporation, CZI is privately held. CZI follows the lead of Steve Jobs’s billionaire widow Laurene Powell Jobs’s for-profit LLC Emerson Collective and billionaire PayPal cofounder Pierre Omidyar’s Omidyar Network. All three of these companies target public schools for profit and participate in the neoliberal restructuring of public education while framing their profit-seeking, union-busting, standardization-promoting, and anticritical activities as “philanthropy”—a gift to children, schools, and the public. All three promote themselves as “innovative” for selling technology products to public schools. CZI in several ways expresses a view of education that affirms the concentration of economic, cultural, and political power: it promotes a pedagogical format that appears to emphasize individual student control while undermining education as dialogic and democratic and shifting control over curriculum and pedagogy away from teachers and communities and toward for-profit corporations. CZI also functions as effective public relations for Zuckerberg and Facebook as they have come under fire for disseminating fake news during the 2016 presidential election and for live streaming murders and suicides. The first section discusses CZI’s redefinition of philanthropy and the importance of recognizing CZI primarily as a for-profit business. The second section discusses CZI’s “personalized learning” projects and how they displace teachers and the possibilities of democratic educational practices while creating the conditions for profit. Misrepresenting Profit as Philanthropy CZI has been widely discussed in the popular press for the controversial LLC form. Most accounts discuss the advantage to billionaires of retaining the greatest degree of control over the use of their money. The for-profit approach to “philanthropy” has been termed by some as philanthrocapitalism. However, there is vast confusion in the popular press and academia as to whether the term refers to the application of business ideals, corporate culture, and a private-sector approach to charity (as typified by nonprofit venture philanthropy foundations, such as Gates, Walton, and Broad) or to the declaration of a for-profit company itself as a philanthropy (CZI, Emerson, Omidyar). For-profit corporations cannot meaningfully be considered philanthropy. Unlike nonprofit philanthropic foundations, which must release tax reports on the use of money, LLCs can do absolutely anything with the money in them without disclosing what they do. Money in the LLC could be taken as profit, invested in educational projects, or moved to other for-profit businesses, such as Facebook, or arms dealing for that matter, and there is no way for the public to know. Money can be withdrawn freely, and unlike with nonprofits, there is no requirement that a portion of funds be used for the mission. Indeed, CZI could do nothing at all, and nobody would be the wiser. Furthermore, unlike tax-exempt nonprofit foundations, for-profit LLCs can conduct political lobbying and can do so secretly. CZI and other LLCs do not shelter income from taxes, but they can write off business losses and get tax benefits that way. 
As I detailed in my book The Gift of Education: Public Education and Venture Philanthropy, the early 1990s began a shift from traditional or “scientific philanthropy” typified by Carnegie and Rockefeller to “venture philanthropy.” Scientific philanthropy, which held sway in the twentieth century, was animated by industrial capitalists’ desire to support and expand the public sphere in part to bolster a meritocratic ideology. Public knowledge sources, such as libraries, museums, and schools, would allow individual citizens the means to acquire knowledge for private-sector development and public participation. Moreover, Carnegie was deeply worried about the growth of radical social movements, especially communism, and saw philanthropy as a means of undermining them by funding public institutions in ways compatible with the maintenance of elite economic and political rule. Traditional philanthropists simultaneously supported the expansion of public institutions and shifted the burden for social opportunity onto individuals, who had better be willing to work hard to acquire knowledge and culture to compete for social ascendance. Giving, for scientific philanthropy, was largely publicly oriented and generally granted control over the uses of the money to the recipient public institutions. In contrast, venture philanthropy (VP), which began in Silicon Valley in the early 1990s, promotes a financial agenda of public-sector privatization and deregulation while remaking philanthropy on the model of corporate culture. The language of VP is derived from tech start-ups and venture capital. For VP, charity is positioned as an “investment” with outcomes termed “return on investment.” Money is to be “leveraged” and initiatives to be “scaled up.” Venture philanthropists endow their own nonprofit foundations, and get tax breaks to do so, but then largely retain control over the uses of the money. Most significantly, these uses involve privatizing public goods and services, such as schools, and promoting the ideology of corporate culture. For example, of the big three venture philanthropies, the Gates Foundation most aggressively promoted charter expansion, the Walton Foundation promoted vouchers, and the Broad Foundation promoted deregulating and privatizing educational leadership on the model of the corporation and database tracking projects. Venture philanthropists often leverage their funds by staging competitions among states, districts, and schools for scarce money. VP has created what I term elsewhere a “new market bureaucracy,” in which the public bureaucracy has been denigrated as hopelessly inefficient and yet new layers of privately controlled organizations, such as charter funding and lobbying organizations, have been rolled out at the local, state, and national levels. VP-style “leveraging” as a model of influence was adopted by the Obama administration’s signature Race to the Top program. It dangled money in front of states in exchange for aggressive efforts at expanding test-based accountability and lifting caps on charter expansion. There is an unvirtuous circle in which corporations and the superrich lobby to maintain low taxes that result in the inadequate funding of public infrastructure, and they lobby to get specific educational policies legislated that promote private-sector responses.
Then venture philanthropists get tax benefits to put their money into foundations that they control to influence and shape the public sphere—often in ways that allow them to pillage and profiteer, such as getting for-profits into the public sector. Venture philanthropies have managed to strategically influence the use of public money toward privatization and corporatization projects while allowing the billionaire donors to have outsized influence over these initiatives. For example, two of the largest recent educational reform pushes, charter schooling and the Common Core State Standards, were so massively promoted by the Gates Foundation that it is unlikely that they would have been implemented at nearly the scale they have without it. In this sense, venture philanthropists have managed to hijack governance from the public with regard to the use and direction of public wealth and decision-making. VP ought to be understood as profoundly political in that it reallocates political control over public institutions and influences their priorities. As well, it transforms social relationships and culture in institutions in vertical and authoritarian as opposed to democratic ways, largely by promoting the ideologies of corporate culture. For example, public schooling has the potential to create the conditions for democracy by providing the knowledge and dispositions for collective democratic self-governance. Yet the accountability and charter movements aggressively pushed by venture philanthropists promoted rigid and repressive pedagogical approaches while shifting curriculum decisions to private control. VP has played a central role in delinking public schooling from its formative capacities in democratic society, fostering instead a conception of schooling for private training for work and consumption. While VP has de-democratizing tendencies in part by privatizing ownership and control over the public sphere, it is nonetheless philanthropy. In contrast, philanthrocapitalism, as typified by CZI, Emerson, and Omidyar, is mistakenly discussed in both mass media and academia as philanthropy, albeit with greater secrecy and less transparency because it does not have to file tax forms disclosing its activities. Philanthrocapitalism in the form of the LLC should not be considered philanthropy but rather business. Writing in the New York Times, Singer and Isaac announced CZI with the headline “Mark Zuckerberg’s Philanthropy Uses L.L.C. for More Control.” They went on to explain the advantages of the LLC form for Zuckerberg, including for-profit investment, political activity, fewer rules, and no disclosure. They conclude, “In all those ways, the L.L.C. acts more like a private investment vehicle for the couple.” While they are right that CZI acts like a private investment vehicle, it is impossible for anyone ever to find out what CZI really does in terms of giving or business. It is a mistake to describe CZI as philanthropy. The British medical journal The Lancet warns, “When it comes to the Chan Zuckerberg Initiative, the public may never know—and we have no legal right to know—whether any of the promised charity will actually go to charity at all.” Similarly, attorneys Dykes and Schwartz, writing in the journal Trusts and Estates, point out that “the donor has near total flexibility to change the LLC’s mission or projects at any time and has wide latitude to engage in self-dealing. . . .
And because the promise to contribute to the LLC is unenforceable, the founder needn’t even follow through on the plan of funding the LLC at all and can unfund it at any time.” CZI’s secrecy and privacy need to be seen in the context of the earlier Zuckerberg corporate school reform activities in Newark. Zuckerberg sought to reform schools while circumventing public participation in policy enactment. He was criticized for actively excluding the public from planning. Furthermore, the reforms themselves, such as the chartering, faced public criticism. The LLC form appears to be intended to specifically circumvent public accountability. However, there is a serious question as to whether CZI functions philanthropically at all or whether its activities are only profit seeking and “philanthropy” is a label intended to project an image of “corporate social responsibility.” VP has already significantly redefined charitable giving by eroding the distinction between private good and public good. Nonprofit foundations like Gates, Walton, and Broad promote the idea that public problems are ideally addressed through private-sector solutions, that only the rich can save the poor, that the private sector is always more efficient and less bureaucratically encumbered than the public sector, and that public institutions, such as public schools, ought to be privatized and corporatized for their own good. Perhaps most insidiously, VP contributes to a neoliberal reimagining of obligation in which private individuals are only responsible to themselves but not to the other members of the public. The LLC corporate form of CZI continues yet goes beyond VP’s efforts to celebrate private dominance over public goods and services. Philanthrocapitalism represents an effort to collapse the distinction between public and private spheres and between profit seeking and charity. For example, a number of writers assume that for-profit “philanthropies” need to show both financial returns and “social returns.” Because LLCs are not required to reveal their finances or activities, the distinction between these activities cannot be ascertained. Nonetheless, giving the LLC the benefit of the doubt, these imperatives for financial and social returns are often in conflict, as they are with any business corporation, and the pursuit of profit comes at the expense of social returns, that is, the public interest, when the survival of the business is at stake. Zuckerberg largely profits through providing free content generated by other users and selling advertising on that volunteered content, while most philanthropy supports nonprofit causes that are intended to serve the public interest. These contradictory aims of wealth extraction and public purpose become glaringly obvious with CZI’s central project of personalized learning and data extraction and its aims of replacing teacher labor with automated programs.

CZI’s “Personalized Learning” Projects

Some of CZI’s for-profit educational investments follow a profit model of pay-for-fee services. CZI invested $50 million in BYJU’s (Think and Learn Pvt. Ltd.), a for-profit educational app that teaches mostly math, science, and test preparation, with its largest audience in India. Parents pay for the courses and subscriptions. This represents a development of neoliberal education that the World Bank has been promoting for years—pay-for-fee privatized education instead of the development of free universal public education in developing countries.
It deepens the educational privatization that Chubb, Moe, and other U.S. rightists have been advocating through the use of technology, namely, “unbundling” the school and treating each piece of what a school does as a commercializable service. CZI is investing in for-profit educational technology companies like BYJU’s, whose primary goal is the accumulation of wealth by selling to end users and public school districts and cutting costs to maximize profit. The profit motive results in an effort to extract money from the educational process by maximizing the amount of money taken in by the company and minimizing business expenses for labor, materials, and other overhead. BYJU’s appears poised to compete globally for educational spending, as, following the CZI investment, it purchased Edurite and Tutorvista from educational for-profit giant Pearson. The big prize in these purchases was not so much these companies as their databases of students, including millions of U.S. students. The acquisition of student data is at the center of personalized learning as a business. Personalized learning itself developed out of commercial marketing applications. As an announcement of CZI’s $50 million investment in for-profit education company BYJU’s puts it, “personalization is a theme long pursued by consumer internet companies, especially e-tailers, who offer consumers a bouquet of choices to pick from based on their past shopping or browsing history. This in turn increases a consumer’s engagement on the platform and significantly increases probability of a purchase.” While there is no widely agreed upon definition of personalized learning, proponents claim that curriculum and pedagogy are made relevant and tailored to students’ interests. However, personalized learning has come to be associated with educational technology products that are designed to standardize and “deliver” content regardless of the specificities of the student, the teacher, or the context.

Summit and the Trade in Student Data

CZI bought Summit Charter School Network and has been promoting the implementation of Summit personalized learning technology in public schools around the United States. Summit is connected to advertising-driven Facebook, whose software engineers helped develop its online education platform. While Summit is initially given to schools without a fee, it, like Facebook, collects user data. The connection to Facebook raises troubling school commercialism implications regarding the imposition of advertising in the classroom, the uses of private data, and the nonconsensual gathering of private, personal information from minors. Unlike its investment in BYJU’s, CZI’s personalized learning project Summit does not profit by charging a fee for service. Summit enters into use agreements directly with schools and hence becomes part of the required curriculum. Facebook profits in part through advertising and in part by selling user data. While the pilot Summit Basecamp program is being rolled out to public schools without charge, just like Facebook is provided to users “without charge,” critics raise privacy concerns because, like Facebook, Summit collects valuable user data, that is, student data. The agreement between Summit and the participating school states that Summit may use student data to develop educational services. Since Summit is part of a for-profit company, CZI, LLC, this means that the data could be used to develop other for-profit educational services within CZI’s umbrella. Facebook is an advertising company.
Most of its profit comes from using volunteered data to sell ads targeting its users. A personalized learning curriculum sets the stage for a new kind of profit taking by educational investors in the form of data mining. Just as Facebook uses and sells data from its users to advertisers, the data that can be collected from a young school-based “captive audience” that is in no position to refuse collection could be worth a fortune. CZI’s private for-profit form and the interrelations of for-profit and nonprofit entities and sharing of commercializable information raise a number of troubling questions: Will CZI finance its educational activities through the commercialization of its users’ data, as Facebook does? Will the content in Summit applications embed advertising? Will Summit deliver users to Facebook, which will deliver them to advertisers? Will Summit run the data it takes through big data models to derive commercially valuable marketing information? There appears to be nothing in the user agreement or the laws structuring CZI as an LLC that would preclude these commercial possibilities. Moreover, due to the LLC form, one can never know the answers to these questions. Consequently, the public ought to assume that the answers to these questions are all yes unless proven otherwise. The Cambridge Analytica scandal that emerged in 2018 revealed that as early as 2015, Facebook and Zuckerberg were aware that Cambridge Analytica was secretly taking Facebook users’ data and using them to target political advertising for campaigns such as Trump’s 2016 presidential campaign. The taking of 87 million people’s data without their knowledge or consent, to be used to manipulate them politically, highlights more than the questionable judgment of CZI and Zuckerberg. It also showcases how private companies with a vested financial interest cannot be trusted to regulate themselves when it comes to the responsible use of private information. The Cambridge Analytica scandal shows that Zuckerberg, Facebook, and CZI ought to be forced to open Summit and all of CZI’s activities to public accountability, oversight, and regulation or face expulsion from public schools to protect children’s legal rights to privacy. Among the most troubling aspects of CZI’s expansion of Summit is the effort to replace teachers with machines and to shift the control over knowledge from teachers to a for-profit company. CZI has been promoting “personalized learning,” “software that puts children in charge of their own learning, recasting their teachers as facilitators and mentors.” Zuckerberg has explained the vision for how personalized learning works in a classroom: “Students cluster together, working at laptops. They use software to select their own assignments, working at their own pace. And, should they struggle at teaching themselves, teachers are on hand to guide them.” On the surface, this vision for personalized learning appears to counter some of the worst aspects of the past two decades of excessive standardized testing, the central feature of the so-called accountability movement. It replaces the single high-stakes test with the possibility of taking a test as many times as is necessary to check “mastery.” However, rather than getting rid of the accountability movement’s legacy of overtesting, which has been opposed by the antitesting Opt Out movement, Summit’s program appears to build testing more deeply into the curriculum and to worsen the problems of teaching to the test.
Orienting learning around constant testing and teaching to the test displaces dialogue in favor of a monologic form of learning. Such a model of pedagogy suppresses curiosity, investigation, interpretation, judgment, and debate in favor of transmission through passive absorption. Moreover, standardized tests obscure the social and ideological positions of the makers of the tests and the takers of the tests. As a consequence, knowledge appears disconnected from its conditions of production and the symbolic and material contests that inform claims to truth. CZI’s expansion of standardized testing extends beyond its integration into personalized learning. In spring 2017, CZI partnered with The College Board, which sells the PSAT, SAT, and Advanced Placement tests, to expand adaptive learning test preparation in school. CZI does not appear to be committed to ending or replacing standardized testing as a gate to academic promotion and economic inclusion so much as it wants to enter into pay-for-fee services in an ever more competitive testing climate. Proponents of personalized learning claim that in opposition to competitive individualized learning, personalized learning emphasizes collective learning with students clustered around tables together. Zuckerberg and Summit CEO Diane Tavenner liken this collective learning to the corporate culture of a tech company. According to Zuckerberg, “it feels like the future—it feels like a start up.” And says Tavenner, “It looks more like Google or Facebook than a school.” Empirical evidence from traditional measures of test-based achievement does not exist to support the claims of proponents of personalized learning. Such traditional measures of test-based achievement also fail to address a more significant matter that bears on the personalized learning debate, namely, how it impacts the ways that teachers and students address the politics of the curriculum—that is, contested knowledge and meanings and the ways that these contests are connected to broader material and symbolic struggles. CZI’s promotion of “personalized learning” aims to replace teachers with machines, to replace dialogue between teachers and students with student use of programmed software. It replaces the meaning-making work that teachers do in classrooms, which is often contextually based, with a decontextualized software program. Promoters of the software suggest that, by being adaptive to students, it is “responsive” and “relevant.” Providence, Rhode Island, school superintendent Christopher Maher, who is overseeing one of the most extensive implementations of personalized learning with CZI, stated, “Personalized-learning technology helps students choose subjects they find personally meaningful and culturally relevant.” However, whereas teachers can link knowledge to student experience to make learning relevant, such software cannot. Furthermore, teachers can go beyond linking knowledge to experience by problematizing how knowledge is produced and how its production relates to the particular social environment experienced by the student. To give an example from Maher’s district, how might a working-class student who is African American or Latino use personalized learning software to make sense of the experience of the extreme racial and class segregation that currently structures the city of Providence and the racialized and class-based pedagogies of repression that currently organize its public schools?
Whereas a teacher can engage in dialogue with this student about the history of that reality and the competing ways of theorizing and interpreting it, the curriculum software cannot. As CZI moves to displace teachers with technology, it repurposes teachers as “mentors.” As mentors, teachers are not “deliverers of content” but rather take on the role of fostering “cognitive skills.” In CZI’s vision, there is an artificial split introduced between “content” and “thinking,” as if one can think without content. As Paulo Freire emphasized, teachers are always more than facilitators or “mentors.” A teacher’s pedagogical authority is always exercised, whether the teacher intervenes to question the content or tacitly endorses it by remaining silent. “Personalized learning” is not just a misnomer; it is an oxymoron in that it depersonalizes learning by removing it from the subjective realities of students’ lives, the objective realities of the broader context for learning, and the dialogue between teachers and students. By depersonalizing learning, such software also denies the relationships between claims to truth and the ideological positions and social interests that often animate claims to truth. In this sense, CZI’s educational activities run contrary to the aspirations of democratic education that aims to promote dispositions of curiosity, dissent, and dialogue and that links knowledge to power such that knowledge can form the basis for social agency. Instead of helping students understand their potential to act on the community and the world, personalized learning software often rewards learning with video game–style extrinsic rewards. A key feature of “personalized learning” is what Garrison et al. have referred to as “the Netflixing of education,” or the implementation of adaptive software and data analytics. Adaptive software works like Netflix to predict and suggest the direction of lessons. Data analytics compile and quantify the student’s use of the software, measure that use with tests, and provide the teacher with data that are supposed to inform the teacher’s mentorship. As I have discussed elsewhere, the expansion of adaptive learning technology and its informatization is troubling because it provides a prescribed path of curriculum and creates a longitudinal record of the student over the course of a year and from year to year. Because the “data” about the student’s activities are numerically quantified, these activities falsely appear as neutral and objective. As well, because the student chooses some activities and to some extent the pace of use, the program appears to foster individual control over learning despite the fact that the lessons are prefabricated and standardized. Adaptive learning software sorts and sifts students, promoting some and punishing others. Like standardized testing and the standardization of curriculum, such sorting and sifting are informed by a cultural politics that the software does not make explicit to its users. Hence the curriculum of adaptive learning software puts forward particular class and cultural group values, ideologies, and subject positions while falsely wrapping these cultural politics in a guise of disinterested universality and objectivity abetted by the ideology of technology. Personalized learning software recontextualizes knowledge in ways that are viscerally stimulating but remove the skill acquisition from any meaningful individual or social use.
Take, for example, Netflix founder Reed Hastings’s Dreambox Learning: Dreambox takes elements from animated video games, with some math lessons populated by aliens that whoosh and animals that cluck. When students complete a math lesson successfully, they earn points that they can use to unlock virtual rewards. Learning math in this case is not about acquiring tools to understand the self and the social world in order to act on them. Math in this case is a desocialized skill linked to meaningless extrinsic rewards and frivolous entertainment. The point is not that there is no place for frivolous entertainment but that the pedagogical approach favored by personalized learning disconnects skills from their social meanings and possibilities. Compare this Dreambox example to a lesson developed by critical math educators: a former Chicago colleague of mine, for example, taught Latinx youth fractions through “driving while brown/driving while black” lessons. In this case, the experience of these youths being targeted by police was analyzed with fractions, furthering their comprehension of that experience. The lesson then became the basis for a community action project to challenge racist police practices. This example highlights what relevant and meaningful learning can be when it is contextualized socially and connected to individual student experience. There is a false debate in both the popular and academic literature about personalized learning. Proponents, such as Maher, falsely portray it as relevant and meaningful while celebrating these supposed virtues against transmission models of pedagogy. Critics of personalized learning, such as Benjamin Riley, defend the role of teachers as imparters of knowledge that can be accumulated by students. Both positions fail to comprehend the cultural politics of knowledge making in schools. Such proponents of personalized learning are now appropriating from traditions of critical education the rhetoric of meaningful and relevant learning while promoting forms of learning that are neither. Personalized learning is in fact depersonalized because it decontextualizes knowledge from its conditions of formation. The transmission model that sees the teacher as imparter of knowledge treats knowledge as static and the student as an empty vessel. Both positions miss the way that knowledge is co-created through dialogic exchange between teachers and students. As Stuart Hall emphasizes, culture is made through meaning-making practices of exchange, albeit always in unequal ways.

Pillaging Teacher Work for Profit

According to Dale Russakoff in The Prize, Zuckerberg’s central focus on school reform in Newark involved changing teacher work. Russakoff writes that Zuckerberg saw the teacher workforce as a problem and was interested in making teaching work attractive. What Russakoff does not explain is why, then, Zuckerberg embraced neoliberal educational reforms that are thoroughly off-putting to prospective teachers by transforming teaching from a profession with professional security, autonomy, and control over pedagogical and curricular decisions to an insecure job subject to intense surveillance, micromanagement, punishing test-based accountability, and downward pressure on pay and benefits. Specifically, Zuckerberg in Newark was on board with Booker and Christie’s push for union busting, chartering, and value-added modeling.
CZI continues Zuckerberg’s long-standing focus on framing the teacher and teacher work as problems that need to be overcome. The different aspects of CZI’s educational activities hinge on an allegation that teachers are “unaccountable” and need to be made accountable. This was evident in Zuckerberg’s Newark activities that positioned the teacher workforce as both problem and solution. It is also evident in CZI’s project of replacing teachers with machines. According to Zuckerberg, what will allegedly produce accountability is students’ activities on the software, which can be numerically quantified and will represent real learning. Students are to be accountable to the software, which is made elsewhere by experts. Yet students are not to be accountable to teachers, who in CZI’s vision no longer play the role of teachers but of “mentors.” Even as Zuckerberg’s educational solutions demand numerically quantifiable accountability through his products, Zuckerberg’s own educational activity has been and continues to be unaccountable to the public due to the secrecy afforded by the LLC structure. CZI’s plan to replace teachers with a “student-centered” approach is not a new privatization strategy. When Chris Whittle announced his plans for the Edison Schools, now Edison Learning, a for-profit company that manages schools, he described a seemingly progressive vision in which fewer teachers could be used as students taught students. This labor- and cost-saving plan was soon enough scrapped by Whittle and replaced with a radically rigid and standardized approach that involved having every student in every Edison School learn the same thing at the same time. The earlier example of Edison is significant for how, in the history of corporate school reform, the celebration of student freedom from teachers has been about cutting teacher labor, the single biggest element of school overhead, in order to maximize profits. In the case of CZI, the shift of money away from the teacher workforce and toward for-profit companies that develop curriculum software and adaptive learning technologies represents a redistribution of public wealth from public employees to private ones. It also represents a shift in control over who decides what should be taught and how it should be taught. The media scandal over Facebook’s editorial role in overseeing its newsfeed following the 2016 presidential election has great relevance for this question of curricular control. Following the discovery that a large number of fake news stories had circulated through Facebook during the 2016 presidential election, the company’s editorial oversight came under scrutiny. This scrutiny revealed that a small number of people at the company were making decisions about what news to send to users’ Facebook pages. News, like curriculum, always comes from a particular vantage point with particular assumptions. The personalized learning promoted by CZI denies and obscures the human and inherently political practices of meaning making by disappearing those making the curriculum and disallowing dialogue between students and those making claims to truth. In this sense, CZI’s version of personalized learning is antithetical to democracy itself.
Henry Giroux makes the point that education is always implicated in both politics and the kind of society people are collectively forming: Education [is] important not only for gainful employment but also for creating the formative culture of beliefs, practices, and social relations that enable individuals to wield power, learn how to govern, and nurture a democratic society that takes equality, justice, shared values, and freedom seriously. As CZI works rapidly to expand as an education business disguised as a philanthropy, citizens need to comprehend the broader social and political stakes: the expansion of standardized testing and standardized curriculum, the destruction of the dialogic relationship between teachers and students, and the redistribution of educational resources and of decision-making about pedagogy and curriculum from the public and teachers to the corporation.
Inviting the Reader: Narrative Values, Lyric Poems
by Sydney Lea

The editor of an online journal recently asked 25 poets to complete the following in one sentence: “Poetry is…” Here’s what I wrote: “Nowadays, poetry consists of units of language that their authors call poems, and can range from conventional forms to prose poems and include anything in between.” I meant, obviously, that no one definition of poetry is available in our age. But was it ever? The claim, That’s not poetry, which some make when, say, a poem does not rhyme, reminds me of the history of ... poetry. What would John Milton, who came to hate rhyme, by the way, calling it “the invention of a barbarous age” – what would he have thought of fellow non-rhymer Walt Whitman, for example? What would Milton have thought even of William Wordsworth? What would Wordsworth have thought of Emily Dickinson? What would Dickinson have thought of Ezra Pound? And so on. I begin in this way just to indicate that nothing I say tonight has doctrinal value. I try never to make the claim That’s not poetry, simply because there have likely always been almost as many different notions of what poetry is as there are poets. So I have no sweeping take on what poetry may be. Even if I did, however, it wouldn’t be very important. What I’ll really be talking about tonight is my own practice as a writer. And my practice is my practice not because it represents the right way but because it’s my practice, if you’ll allow me some circular logic. I don’t want to make what I can do the measure of virtue in poetry and therefore imply that anything else is a vice. That itself would be a vice called narcissism. It would also exclude any number of my favorite contemporary poets, maybe in fact most of them, from my inner circle. And yet…. And yet what would any of the poets I mentioned make of the following poem in a recent New Yorker? What do you make of it?

A Ship’s Whistle

Years passed and I received no letter with the word “trombone.” The distant cousins wrote, offered their shriller sympathies. “What’s wrong with us?” Nothing I knew. Plugboard and Isinglass, grimoire and cwm, friends all. Still I felt horribly alone. Until one day it dropped through roundel light onto the mat. I was tearing my dictionaries of hope – who, why, and what – apart when it sounded, that note pressing for home. Trombone. And fearing it a dream was like waking in the wrong room, not daring to believe in your return, or having come to my senses after sickness. Veneer, mirror, and comb: objects that shivered as relief swelled under them, they drew lots to be turned to words which, soon as said, I knew were brass. Years sliding past alone until – avast! – Trombone.

Well, I have no idea what those poets would make of that. Me, I can’t make head nor tail of it. Behind this poem, I suspect, lies some theory or aesthetic to which I am not privy, and to which I don’t find myself much moved to be privy. In fact, I wonder if the writer himself can have been MOVED to write this. I certainly can find nothing moving in it, and, as I recall Maxine Kumin wondering in conversation, “what’s the point of a poem with no feeling at all?” But no, I can’t legitimately argue that the trombone poem is not poetry, for fear, again, of the narcissist’s presumption.
But I want to speak of narrative values in lyric poetry not merely because I employ them but simply because I think they can help us avoid the sort of density, even impenetrability, of the poem I just read– the sort of writing, as I see it, that can give poetry a really bad name even among people who read a lot, who come to libraries and good bookstores like this one. My friend Garret Keizer tells me that when the 1973 Arab-Israeli war broke out, Israeli conscripts rushed home to get their rifles, which they were obliged to own, and copies of poetry collections by the great Yehuda Amichai. That is an extreme example, of course, but perhaps illustrative: I find it impossible to imagine anyone’s running home for copies of “A Ship’s Whistle” before going into combat. Before I proceed further, I should tell you that many younger editors are all but militantly anti-narrative. At 76, then, I’m doubtless quaint, and becoming more so in ways that I don’t even recognize, but that those younger people will. So be it. I’m old enough not to care what the smart and hip people think. As the title of my presentation suggests, I want poems that invite readers in, rather than ones that exclude them, and this is what in my opinion narrative values can help us with. Mind you, I say narrative values, which is not the same as talking about writing narrative poetry– with actual plot, beginning, middle, and end. I have written a certain amount of such poetry, and I have surely enjoyed it in other writers, all the way from the extraordinary epics of Milton and Spenser to much of Frost’s longer work to Ellen Bryant Voigt’s Kyrie to a lot of Sharon Olds’s poetry to the matchless story-poems of B.H. Fairchild to fabulous things by Maine’s poet laureate, Wesley McNair (on whom more later). Let me read another poem before I try to be more specific about this matter of narrative values, especially as they may apply to non-narrative, lyric poems: Not since I was four or five at most And in the first of so many striped tee shirts Have I been this close to the flavour of safety. I’m walking into town again, the child of hills. You bought me fish and chips for lunch, my own Adult portion because I asked for it, in Evan’s Tiled restaurant, the Alhambra of takeaways. Fine living robs the faculties of right judgment; I turned, lost sight of you that afternoon in M&S. Gone, and the unworn self at once puts on habits Of wandering. (“Have you seen my … ?”) They stood me on a counter. You appeared And recognition bore away the riderless hoofbeats Of fear. Pride claimed me, later, when you praised My instinct to be visible, which soon became The need to be noticed, a confused stage, A knowingness that wasn’t what you’d meant At all! You were relieved to see I’d asked for help, Could be that lost and, knowing it, be found. My deep-sea stripes helped you spot me, Their clumsy colors sliding past, today, in town, The blue and brown and silverflash of cars Like keys to some fastness. High ground. Without pausing for any detailed analysis, I’ll simply say that I’m much taken by this poem, which belongs to an Englishman named Will Eaves, who –astonishingly– also wrote that poem about the trombone– or about something. I must say that I wonder what happened between 2011, the date of this poem, and 2015, the date of the trombone poem? What could have motivated the change from this manner to the manner of the one I led off with? I can’t answer that, and it’s entirely possible that I miss the point of the later poem. 
But, so far as I can see, there are certain purely factual issues missing from that later one. Despite the author’s reference to his “dictionaries of hope–who, why, and what,” who, why, and what seem conspicuously absent from what he presents. Here’s where, for me, those narrative values come in, the values of conventional short fiction, ones that my generation learned about in grammar school: character, plot, and setting. In any case, there are a few basic questions I pose to any draft I produce, and they are much related to character, setting, and plot. Who’s talking? To whom is he or she talking? Where is the speaker on delivering the words we read? Answers to these questions in the trombone poem are impossible for me to find. The speaker, whatever strange beast he is, could be anywhere as he waits for a letter containing the word “trombone,” though why he should desire such a letter is mysterious in the first place; so I lack a sense of setting. Who the speaker is remains totally unclear; I have no sense of character. I want that sense, and if I write or read a poem in the first person, I want the character named “I” to show some characterological qualities too– I am not interested in the thoughts or feelings of a mere pronoun. In this case, if the poem were unattributed, just for trivial example, we could equally imagine its speaker to be female as male. If I can’t even be sure of something so apparently minor as that, how may I be expected to know what my students have always called “deeper meanings”? Who, why, what, where? If my poem can’t answer at least some of these questions, I feel I need to work on it further. As the saying goes, I want to know, in my poem or someone else’s, what’s the story here? Not knowing that, not knowing, as my generation used to say, where the speaker is coming from, I feel not invited in but– almost intentionally it seems– excluded. I feel a need to know some encoded language, or some other secret, in order to enter the poem. Lacking the knowledge of who-what-where-why, etc., I turn away– and that, for me, is about the last effect I want to have on my reader. I don’t need a plotted story per se, but I want those seemingly factual issues to be as clear as possible. Does this mean that as writers we need to “dumb down” our poems? Scarcely. Poetry by its nature is engaged with complex feelings and thoughts. But there is a vast difference between complexity and mere complication. In my view, it’s the distinction between the poem about the child’s getting lost in Marks & Spencer and the poem about waiting for the word trombone. It is also the difference, to my taste, between Robert Frost, one of the most complex minds in our literary history, and Ezra Pound, one of the most complicated. If as poets we present complex material, we are already challenging our readers to pay very close attention. There is no speed-reading of poetry, no scanning for story. Why would we want to expand that challenge to include what I have called simple facts: who, what, where, why? Again, the presentation of such facts is what I call the invitation to the reader. To use a famous example, that reader may say, “This is tough stuff to sort out, but at least this much I can know”: Two roads diverged in a yellow wood, And sorry I could not travel both And be one traveler, long I stood And looked down one as far as I could To where it bent in the undergrowth. We have a starting point for the famous instance I use here. 
A sort of door has been held open to us as we move into a poem that, though often reduced to a mere pep talk about self-reliance, is in fact profoundly ambiguous and finally even mysterious, perhaps like all good lyric. No, the inclusion of narrative values does not dumb poetry down. In fact, the introduction of such values is often very subtle, even oblique, rather than in-your-face or simplistic. Consider this poem by James Wright:

Autumn Begins in Martins Ferry, Ohio

In the Shreve High football stadium, I think of Polacks nursing long beers in Tiltonsville, And gray faces of Negroes in the blast furnace at Benwood, And the ruptured night watchman of Wheeling Steel, Dreaming of heroes. All the proud fathers are ashamed to go home. Their women cluck like starved pullets, Dying for love. Their sons grow suicidally beautiful At the beginning of October, And gallop terribly against each other’s bodies.

This poem is clearly a first-person commentator’s indictment of a social structure that is ruinous, and whose priorities are badly skewed. Right? Well, not so fast. Let’s go back to those basic narrative questions. The very first line of the poem indicates that the speaker himself is in the Shreve High stadium. He is a participant in what he seems outwardly to condemn: a sterile and fragmented social world (even the night watchman is a ruptured one), in which ruined adults heroize the sons who will soon join them in this erosive context. And yet there is something beyond mere violence and rupture in what the speaker beholds; there is also something ennobling, a fact that leads to the imaginative genius of the phrase “suicidally beautiful.” Not suicidal, period. Not beautiful, period. No, both at once – which tells us a great deal about the man whose voice we hear in the poem. He is simultaneously morally taxed and enthralled. So, as you can see, the character-setting-plot complex, the who’s-talking-to-whom, where, and why complex – this can be, and usually is, implicit rather than explicit. It usually leads to a sort of open-endedness as in “Autumn Begins in Martins Ferry, Ohio.” Here’s a further example, by a dear friend, the former poet laureate of Delaware, Fleda Brown; it is brand new, she tells me, and thus far unpublished. It is, obviously, a sad story, and one familiar to any of us who has said goodbye to a strong-willed parent, one who stirs in us both affection and resentment.

What I Should Have Said

Crossing back on the Mississippi bridge, its harps lilting away, small islands whispering shades of gray, and the trees, banks, frozen river, all textures of hoarfrost. What I should have said to my father was, sure, call me Cordelia, old fart. Facts. Stick to facts, you taught me. There is no God. What I know of you is ancient, and hard, lacking this recent sentimentality. Stone in the road, stone in the throat. Back when all should have been possible, but wasn’t, wasn’t! Oh how we suffer, how we all suffered, with it. A lifetime of watching you build a boat to sail away in. Shall I now be the one to watch over you as you navigate the ashes, shivering thin? I didn’t say the most important thing, did I? Suspended, this pageant of anguish. When I was twelve, I flew into a rage, stomped out to the yard. You typed a letter, folded it into a paper airplane, flew it out to me. I only remember it said you didn’t understand me at all, but you loved me. Did it say that? Was love part of it?
I mostly remember the distance, the trembling anger, that nothing, nothing would bridge between us, that I would love and hate you forever, equal portions, taking care not to love so much I’d never escape.

That is, to repeat myself, complex stuff, but, to quote the poem, much of what Brown does here is “stick to the facts.” The point of view, the circumstances, even the physical ones, are clear, though its subject cannot be reduced to one “meaning,” or “message.” As Frost, our first Vermont state poet, once said, “If you’re looking for a message, call Western Union.” Here is another poem that greatly moves me. It is by Wesley McNair, poet laureate of Maine, whom I mentioned earlier:

For My Wife

How were we to know, leaving your two kids behind in New Hampshire for our honeymoon at twenty-one, that it was a trick of cheap hotels in New York City to draw customers like us inside by displaying a fancy lobby? Arriving in our fourth-floor room, we found a bed, a scarred bureau, and a bathroom door with a cut on one side the exact shape of the toilet bowl that was in its way when I closed it. I opened and shut the door, admiring the fit and despairing of it. You discovered the initials of lovers carved on the bureau’s top in a zigzag, breaking heart. How wrong the place was to us then, unable to see the portents of our future that seem so clear now in the naiveté of the arrangements we made, the hotel’s disdain for those with little money, the carving of pain and love. Yet in that room we pulled the covers over ourselves and lay our love down, and in this way began our unwise and persistent and lucky life together.

In a review of McNair’s The Ghost of You and Me (2006), the late, great Philip Levine admired the poet’s “many skewed and irresistible characters who manage to get into odd situations for which there is only one remedy: to persevere. … He strikes me as one of the great storytellers of contemporary poetry.” I think that inclination to story – perhaps especially in the physical details that the poet so cannily remembers and turns to his use – is patent in this quirky and wonderful love poem. One gets a very clear sense, for one thing, of that character named “I.” You can find the same qualities in his latest, and perhaps his best collection, The Lost Child, which, a bit as with Fleda Brown in the poem I quoted a moment ago, involves saying goodbye to a valiant, beloved, and profoundly difficult parent – in McNair’s case, a mother. So values of story, whether explicit or, more often, implicit, accomplish what I call inviting the reader into the poem. But I think there are other things that those values can provide a poet. First, and perhaps most essential in my view, the obligation to provide narrative content serves as a brake on mere self-involvement, or – to use my term of the day – narcissism. In the late seventies, I read an essay by the fine poet Brendan Galvin on poems he called “Mumblings,” in which an unidentified first person “tries to tell the reader how he ought to feel about the nonspecific predicament of another, often unspecified person.” Yes, I thought, or else the poet addresses an unidentified second person about the cloudy difficulties of his/her relationship with that second person, or yet a third, also unidentified.
I, too, disliked the verse Galvin attacked, especially for its evident assumption that a poet, merely by so designating him- or herself, could lay claim to an interesting inner life, because, after all, a poet is ever so “sensitive.” Surely, I believed, an “I” was interesting only if he or she proved to be so, which meant among other things that he or she must cogently reveal at least a hint of identity in the writing itself. Not that all ’70s poets mumbled in the way Brendan had mocked. Some relied on image, whether deep or shallow, plain or surreal. And yet these writers, too, seemed often to exclude me from their work’s deeper resonances. Just as I was expecting some authorial commitment, a poet would turn to notice, say, a pigeon carrying a snip of someone’s necktie through a raincloud, or whatever. Subject matter, so to speak, never quite came out in public. Image is crucial to poetry, of course: it is at the root, etymologically and otherwise, of “imagination.” Much, much good and even great poetry has relied on little else; but if you have any yen for testimony in a poem, it is important to remember, as my old friend Stanley Plumly once noted, that the image has no voice. What I am suggesting is that a lyric poem ought to have subject matter, and that that subject matter should extend beyond the joys and frets and desires and worries of one solitary self. This is true even – no, in my opinion it is true especially – in first-person poems, too many of which, it seems to me, are vaguely akin to certain messages sent out on Twitter. Frankly, I really don’t give a damn that you’re well pleased, say, to have made a money-saving purchase at Costco. I am not, to stay with the technological analogy, interested in your selfie. I have found narrative to be a check on my own narcissism, which is as pronounced as anyone’s. But here’s another opportunity it can provide, something more positive, and to me more intriguing, though it may not be, and need not be, for everyone: it can open my poems to testimony. I could also call it rhetoric, in the classical sense: the language of persuasion, of argument, even of abstraction, which we are too often witlessly warned against. Yet why should rhetoric, so understood, be a goal at all? I’ll answer indirectly. Having not only conducted but also visited many a workshop, I’ve often noticed a kind of anti-rhetorical rhetoric among teachers and participants, implicit in the mantra, “Show, Don’t Tell.” Over and over I’ve heard this buzz phrase; and I’ve been thankful that most of our more revered poets have never heeded it. Consider: The ceremony of innocence is drowned; The best lack all conviction, while the worst Are full of passionate intensity. Even though Yeats himself polarized rhetoric (which he said came of our quarrels with others) and poetry (made of quarrels with ourselves), he went ahead and composed that great section of “The Second Coming,” which is rhetorical to the core. Or, for further example:
1. Getting and spending, we lay waste our powers. Little we see in Nature that is ours …;
2. Beauty is Truth, Truth Beauty That is all ye know on Earth And all ye need to know …
3. Publication — is the Auction Of the mind of Man – Poverty — be justifying For so foul a Thing ….
4.
Time present and time past are both perhaps included in time future … The list could be endlessly extended, but whatever our tastes, few would think immediately to send the authors of these “telling” lines – Wordsworth, Keats, Dickinson, Eliot – back to workshop for the sin of too much telling. The plain fact is that to banish rhetoric, to banish telling, to banish abstraction from lyrical poetry – that would be to banish some of the most memorable lines in our poetic history. Let me close, as I am apt to do, with Robert Frost, and with a poem whose rhetoric is easy to hear, since that’s almost all there is: “Provide, Provide” is chiefly persuasion, verging even on preachiness.

The witch that came (the withered hag) To wash the steps with pail and rag, Was once the beauty Abishag, The picture pride of Hollywood. Too many fall from great and good For you to doubt the likelihood. Die early and avoid the fate. Or if predestined to die late, Make up your mind to die in state. Make the whole stock exchange your own! If need be occupy a throne, Where nobody can call you crone. Some have relied on what they knew; Others on simply being true. What worked for them might work for you. No memory of having starred Atones for later disregard. Or keeps the end from being hard. Better to go down dignified With boughten friendship at your side Than none at all. Provide, provide!

If you’re counting, you’ll see that this poem offers four short lines of showing – The witch that came (the withered hag) To wash the steps with pail and rag, Was once the beauty Abishag, The picture pride of Hollywood, and then it offers seventeen lines of pure telling. Not a trace of image, none of metaphor, not a single “poetic” device whatever, except of course for rhyme and meter. But note that those opening four lines, the lines that show, constitute a narrative, however minimal: famous beauty becomes crone. Period. On the strength of this short-short story, the author moves straight to rhetoric, which begins as wry musing and, though Frost doesn’t purge the wryness (he rarely does, in any poem), concludes as realist argument: No memory of having starred Atones for later disregard, Or keeps the end from being hard. Better to go down dignified With boughten friendship at your side Than none at all. Provide, provide! If you agree that Frost here “gets away with” flouting the show-don’t-tell injunction, you may see that he does so exactly because, in that opening mini-narrative, he has shown. Narrative is his means to that end of rhetoric, of argument, persuasion. You may not buy the argument, but you will hear it out, because you know where that argument is coming from. Had that opening mini-story not existed, you’d have been tempted to dismiss everything that follows as mere opinionation, unassociated with any recognizable reality. Narrative is the very grounds on which the poet bases his conclusions, ambiguous as they remain, as always in Frost. Once again, that is, I think this poem passes the who-what-where-why etc. test. And it shows that a writer needn’t overdo the narrative values of his or her lyric. We know just enough about the speaker’s identity in “Provide, Provide” to feel invited in: it’s enough for us to understand that he is one who has seen a formerly beautiful and idolized woman reduced by circumstance to meniality. We know the “where” aspect too: he was on hand to make the observation; we need no more location. In very brief compass, that is, Frost earns his authority to comment.
As I say, not everyone may aspire to rhetorical poetry, to testimony, and even I have a satchel full of Frost poems that I prefer to this one, great as it is in my opinion. Maybe my interest in narrative as a means to argument has simply to do with my being contrarian by nature: if I’m told I can’t do it, I want to try. Or, relatedly, that interest may come from the sort of Frostian impulse he describes in “Two Tramps in Mud Time”; it is no moral failing if you don’t share that impulse. As I’ve said right along, I can only speak from my experience as writer and reader. It is not the “right” experience – unless you want it to be. To quote that great liar Donald Rumsfeld, “It is what it is.” And here is what it is: My object in living is to unite My avocation and my vocation As my two eyes make one in sight. Only where love and need are one, And the work is play for mortal stakes, Is the deed ever really done For Heaven and the future’s sakes.
Key Takeaways from Boston Fed President Eric Rosengren’s May 19 Remarks - Takeaway: The absolutely necessary response to the COVID-19 pandemic – social distancing, aimed at protecting lives – comes at a high economic cost. The recent employment report underscores the unprecedented speed and ferocity with which jobs have been affected by this public health crisis. Excerpt: “I expect that the unemployment rate will likely peak at close to 20 percent. Unfortunately, even by the end of the year, I expect the unemployment rate to remain at double-digit levels. This outlook is both sobering and a call to action. Now is the time for both monetary and fiscal policy to act boldly to minimize the economic pain from the pandemic.” - Takeaway: It is important that our progress to date not be undone, and simply allowing business to reopen is not a panacea. Our economic challenges are rooted in public health concerns. If consumers are not comfortable visiting restaurants and shops, relaxing restrictions may do little to bring back business and jobs. Excerpt: “Public health solutions are paramount – without them, it will be virtually impossible to return to full employment. It is vital that the design and timing of reductions in business restrictions not result in worse health outcomes and higher unemployment over a longer period of time.” - Takeaway: The Fed has taken strong actions to mitigate the economic consequences of the pandemic. We want to limit the potential for medium- and longer-term “scarring” from the crisis. This means, among other things, working to minimize the length of unemployment spells and ensuring solvent firms have necessary liquidity. Excerpt: “Preventing bouts of financial instability from having significant spillovers to the flow of credit to consumers and businesses is a vital crisis role for central banks, and the Fed has aggressively played that role during this very challenging period.” - Takeaway: It is important to note that the powers granted to the Federal Reserve for emergency actions involve “lending, not spending.” All of the Fed’s programs involve loans, to be repaid – they are not grants by the Fed. Excerpt: “Lending can play a crucial role in a crisis and in bridging to more normal conditions.” - Takeaway: The Main Street Lending Program, expected to open in the coming weeks at the Boston Fed, is one of the Federal Reserve’s more innovative and important programs. It is designed to help credit flow to small- and medium-sized businesses that were in good financial condition prior to the crisis, but now need loans that can help them until they have recovered from, or adapted to, the impact of the pandemic. Excerpt: “This is an important program, and we’ve worked very hard to get it right. We listened carefully to initial feedback and expanded the program in a number of ways to serve a wider range of borrowers. It will not be able to assist everyone, but we expect that it will provide an important bridge for many businesses that employ much of the American workforce.” - Takeaway: How we respond to this public health crisis will greatly influence whether many of these temporary job losses become permanent. As Fed policymakers, we will continue to vigilantly pursue ways to help the economy return to full employment. 
Excerpt: “The economic shock is an unprecedented challenge for economic policymakers including the Federal Reserve, where we will do whatever we can to support a return to full employment and stable prices – our mandate from Congress – and our commitment to financial stability, which is important to the well-being of all Americans.”
The teaching is organized as follows: see the unit page INF VR - 1° anno 2° sem. The course introduces the student to the basic concepts of the major diseases and the fundamental pathogenetic processes, allowing them to correlate cellular homeostatic mechanisms with alterations of organ function and the clinical manifestations of disease. Students will understand the basic principles of pharmacology, especially pharmacokinetic and pharmacodynamic mechanisms, and the benefit and risk profiles of drugs. The course will help students develop an approach oriented to the prevention, assessment and management of respiratory, urinary and intestinal function alterations and to the multidimensional assessment of pain. PHARMACOLOGY: The course aims to provide general knowledge about the functions and use of drug therapy in the fields of pharmacodynamics, pharmacokinetics, pharmacoepidemiology and pharmacovigilance. Concepts of pharmacology are essential to the development of the nurse's own skills in the management and control of drug therapies, as well as in patient education. This course is preparatory to the study of the individual classes of drugs, which starts with anti-inflammatory and antibacterial drugs and will be completed in the clinical pharmacology module. SEMEIOTICS AND PHYSIOPATHOLOGY: The course introduces the student to the fundamental concepts of pathophysiology and semeiotics implicated in the onset of organ damage and in the development of the main diseases. The information necessary for understanding the chain of events leading to disease will be selected using a practical approach. Further details will be reserved for the lessons that specifically deal with each organ and apparatus. Every fundamental alteration will be explored within a specific organ, and the peculiar manifestations of that organ will be clarified. The lessons will also include the semeiotic part, describing the objective signs that characterize diseases. Semeiotics will be covered after the pathophysiology of the various organs has been treated. At the end of the course, students must know the general mechanisms that lead to disease and be able to identify their characteristics. CLINICAL NURSING I and II: The course offers the contents and the care methodology related to a person's needs and to "fundamental" nursing care. The course will help students develop an approach oriented to the prevention, assessment and management of respiratory, urinary and intestinal function alterations and to the multidimensional assessment of pain. During teaching, lectures will be the main method, combined with sessions applying the content to care situations (guided exercises, video projections, clinical case analyses and readings of testimonies will be used to analyze and reflect on the perceptions and needs of patients and family members). GENERAL PATHOLOGY: The course develops the preparatory contents for understanding the basic mechanisms of diseases. The following contents will be covered: the general principles and disorders of homeodynamics of complex systems, the main exogenous and endogenous pathogenic factors, biological damage, responses to damage with particular reference to inflammation, the processes of healing and chronicity, immunity, haemostasis and thrombosis, vascular pathology, and notions of general oncology.
The course aims to help the student understand the essential links between clinical analysis and fundamental mechanisms, facilitating the reading of the conceptual basis of disease. Urinary elimination: the most frequent signs and symptoms related to urination (polyuria, oliguria, dysuria, ...) and the major alterations (urinary tract infections, incontinence and urinary retention): a) assessment of urinary function and diagnostic procedures; b) urinary tract infection (UTI): verification of specific data on presence/risk and care interventions; c) chronic and acute urinary retention: care interventions and acute management protocol; d) care management of people with a bladder catheter: placement, care, removal and prevention of associated urinary tract infections; the collection of a sterile urine sample. Sleep: principles of good sleep habits, the physiological effects of sleeping, interventions to manage the person with sleep disorders (restless legs, night-time apnoeas, insomnia in the elderly), and the factors that hinder sleep in the hospital (noise in care environments). Respiration: general assessment and definition of the main alterations (hypoxia, cyanosis, dyspnea, cough, hemoptysis, bronchial obstruction, sputum and pathological breathing patterns): a) assessment of breathing; b) management of oxygen therapy. Clinical Nursing 1 learning is assessed in the written examination of "Physiopathology applied to nursing". The test includes 65 multiple-choice questions (1 correct answer out of 4-5). 20 questions concern the Clinical Nursing module, and 15 relate to each of the 3 modules General Pharmacology, Semeiotics and Physiopathology, and General Pathology. The exam is passed when the student correctly answers at least 11 questions related to Clinical Nursing and 8 concerning each of the other 3 modules.
It costs a lot these days to put food on the table. It costs a lot more if you don’t protect and preserve that food from moisture. The Consumer Price Index (CPI), a measure of economy-wide inflation, increased by 0.9 percent from January to February 2022. Since 2021 the index has risen 7.9 percent! In 2022, food-at-home prices are predicted to increase between 3.0 and 4.0 percent! So, it’s important to keep the things you do buy away from the damaging effects of excess moisture. How does excess moisture make your food spoil faster? Read below to discover how humidity can affect the shelf life and quality of your stored food and what you can do about it.
- Excess moisture content in foods provides ideal conditions for the growth of bacteria, yeast, and mold.
- Powdered foods like eggs, milk, juice drinks, protein shakes, sugar, flour, and spices can clump when the moisture content in the ambient air is high.
- Snack foods like chips, crackers, and cookies can become stale due to exposure to excess moisture.
- Keeping pet food away from temperature fluctuations and moisture will keep it fresh.
- Improper food storage is a leading cause of spoilage. If your house is warm and humid, fruits and veggies left in the open will spoil quickly.
- Moisture can also lead to the breakdown of some packaging materials (paper degradation and metal rusting).
- The recommended storage temperature for the best quality and longest shelf life of MREs (Meals Ready to Eat) is 50°F or lower, with relative humidity kept as low as possible.
- Moisture will harm both dry- and liquid-packed canned goods, so it should be kept to a minimum.
- Why do bottles of wine need certain humidity levels? Simple. The corks. The recommended humidity level is 70 percent.
- Grains, flour, cereals, pasta, herbs and spices, sugar, salt, coffee, tea, nuts, and meat should also be stored in dry, low-humidity locations.
The 6 Golden Rules of Dry Food Storage
- Keep food items at least 6 inches off the floor.
- To allow for ventilation, shelves should be placed some distance from the walls.
- The humidity in your dry storage should be no higher than 50-55% to maintain food quality for as long as possible.
- Keep your storage area clean to avoid attracting pests.
- As much as you possibly can, store your food in its original packaging.
- Food from open packages should be stored in air-tight containers with clear labels that identify what the food item is, indicate when it was transferred from its original container, and note any relevant dates.
Points, Lines, Planes, Angles, Circles, Triangles, Quadrilaterals, Pythagorean Theorem, Conic Sections, Proofs, and More Geometry has three main areas of emphasis: the vocabulary of geometry, practical applications of geometry, and traditional geometry, including proofs. Topics include lines, angles, area, perimeter, volume, Pythagorean theorem, axioms and postulates, congruency, and similarity. An introduction to trigonometric functions is included to prepare students for testing they may do before taking a trigonometry course. The Geometry Instruction Pack contains the instruction manual with lesson-by-lesson instructions and detailed solutions, and the DVD with lesson-by-lesson video instruction. See Student Kit for course content and description.
The 5 Basic Massage Techniques Posted on 25th February 2022 at 11:54 Most massage strokes are designed to promote relaxation and relieve muscle tension, but there are differences between the techniques. In this blog, we’ll explain 5 of the most common massage techniques and the ways in which they differ from one another. Effleurage is the most commonly used massage technique. It involves the massage therapist using their hands and/or forearms to provide long, gliding strokes across the area of the body that they’re working on. Since this is a relatively gentle technique, it’s commonly used at the beginning of a massage when applying lotion or oil. Effleurage also allows the client to get used to your touch and lets the body ease into the massage. As the muscles begin to relax, the pressure applied will intensify. Petrissage refers to the kneading, rolling, and wringing of the tissues. The massage therapist applies this massage technique using their hands or thumbs. Petrissage is great for freeing up muscle knots, which is why it’s often used during deep tissue massages. It also increases blood flow and lymphatic drainage for the area of the body that’s being worked on. Friction is also a very common massage stroke. It’s mostly used to loosen muscle knots. Massages that utilise friction as a technique use quick motions along part of the body to generate warmth, which helps the muscles to relax and unwind. Tapotement involves the massage therapist gently and rhythmically tapping on the body using their fists, the sides of their hands, or cupped hands. As a massage technique, it’s used to stimulate the muscles and nerves and promote circulation. Tapotement is therefore a popular massage stroke for pre-event massages. Vibration involves the massage therapist using their fingertips or the heel of their hand to perform a back-and-forth motion over the skin. It can be done quickly or slowly, depending on personal preference or the goal of the massage. This massage technique helps to loosen up muscles in particularly tense areas.
Unlike humans, horses' teeth continually erupt throughout life and this can lead to abnormalities of wear and eruption that are very different from the problems seen in human dentistry. Horses are also susceptible to dental abscesses, and these can often be prevented by early detection of disease such as infundibular caries and periodontal pocketing. A thorough dental examination is just as important as routine rasping. For some horses dental checks will be required annually; however, many horses will require 6-monthly checks, especially those that are young, old, competing regularly, or those that have significant dental abnormalities such as missing teeth. We find that the quality of work that can be carried out is often limited by the horse's tolerance of dental work. Using sedation and modern, battery-powered dental instruments, we can now correct abnormalities more effectively than before, with minimal stress to the horse. At Stable Close Equine Practice this is our preferred method of performing routine dentistry. How about using an equine dental technician? There are some excellent equine dental technicians (EDTs) in Hampshire and we regularly work with several of them. We are happy to work with all EDTs who are members of BAEDT (listed here). If you want to understand what procedures can be performed by EDTs, you can download a useful document here. If you want any advice about choosing or using an EDT, you are welcome to ring us for advice.
A part of Alex Gregor’s childhood was spent growing up in Buncombe County, near Asheville, where he and his family enjoyed canoeing and hiking. “I think that’s probably the origin of my environmental consciousness …those experiences with family and friends, outdoors,” he recalled recently. After college, Gregor held several jobs before deciding to pursue a medical degree. One particular job was in the “social enterprise sector with a focus on global development issues.” He said his passion for the outdoors and his experience working on global issues carried from that career to his new one. “Seeing the intersection of environmental challenges and human health, from that perspective, was a big part of what motivated me to go into medicine,” he said. “Specifically, to get involved in this movement of planetary health.” Now Gregor is a fourth-year medical student at UNC Chapel Hill School of Medicine. But he noticed something missing from his medical training. “What I saw in school was that we talked a lot about health, but not really about some of the big environmental elephants in the room, like climate change, and air pollution or other forms of pollution that really have a huge effect on health,” he said.

Public health and economic crisis

Researchers say that extreme weather events not only take a physical toll on the environment but also are responsible for causing a host of traumatic responses in people who experience the devastation, such as post-traumatic stress disorder, depression and suicide, among others. A 2022 report published by the American Psychiatric Association found that “67 percent of Americans agree that climate change is already impacting the population’s health,” while “55 percent of Americans are anxious about the impact of climate on their own mental health.” What is more, in 2010, mental illness taxed the global economy by “at least $2.5 trillion in direct and indirect costs, including lost productivity and economic growth,” according to a briefing paper from The Lancet Global Health, published November 2020. The paper projects that by 2030, costs associated with mental illness will increase to $6 trillion.

Addressing the ‘elephants in the room’

In March 2020, Gregor and a group of his medical school colleagues decided it was time to act. They formed Climate Leadership & Action Network at the UNC School of Medicine (CLEAN UNC). According to their website, the group has three primary goals: getting medical professionals up to speed on climate topics, working within the health system to reduce waste and greenhouse gasses to “do no harm” to the environment, and getting the health care community involved in formulating policy solutions. Kenan Penaskovic, associate vice-chair of clinical affairs and director of inpatient psychiatry services, was approached by CLEAN members, who had ideas about how to integrate the topic of climate and its impact on public health into a two-week elective course Penaskovic teaches titled Health and Human Behavior, he said. “Over 200 medical and academic journals within the last year [are] simultaneously saying that the number one global public health threat is climate change,” said Penaskovic, who also said that more recently he was trying to incorporate the content into his formal teaching.
“It is an acknowledgement of the fact that we’re all impacted by this and we’re all concerned.” In a text message, Gregor said that since its founding in 2020, “more than 150 medical students and other graduate [and] undergraduate students have participated in CLEAN sponsored events (i.e. virtual lectures and discussions).” Currently, there are 778 students enrolled in the medical school, according to the registrar. Gregor also said in the text that since the 2020-2021 academic year, “all first and second year medical students (M1s-M2s) have been taught about climate change impacts on public health in the foundation core curriculum, i.e. clinical science (including cardiovascular, pulmonary, renal and …psychiatry blocks) and social and health system courses.” There are roughly 190 students per class. One goal listed on CLEAN’s website focuses on “helping the health system” reduce its carbon footprint by identifying areas where reusable items can reduce waste, for instance. In order to facilitate change at the institutional level, however, students and leaders at the school must work together. Assistant Professor Yee Lam, who teaches primary care at the medical school, is CLEAN’s faculty adviser and has acted as a liaison between the group and medical school leadership. In addition to advocating for elective courses that address climate change, CLEAN offers an environmental impact evaluation. “There is this planetary health report card that comes out and kind of gives an assessment of where your institution is at the moment on a variety of factors,” Lam said. One of the issues CLEAN is exploring with the administration is whether sustainable practices can be enhanced in the clinical setting by partnering with vendors that use less of the “superfluous packaging” that comes along with the many medical supplies used daily in health care settings. There are five medical schools in North Carolina, but it’s not clear whether any of the others offer any coursework on the impacts of climate change. A spokeswoman from East Carolina University’s Brody School of Medicine said the school currently doesn’t offer any coursework on the impacts of climate change on public health. Campbell University School of Osteopathic Medicine, Wake Forest University School of Medicine and Duke University School of Medicine did not respond to requests for comment.

A national movement

The idea of addressing the impacts of climate change on public health in medical school curricula appears to be spreading across the country. Last month, Lisa Doggett, co-founder and president of the board of directors of Texas Physicians for Social Responsibility (PSR), announced in a press release that three Texas medical schools – Dell Medical School at the University of Texas at Austin, Baylor College of Medicine in Houston and University of Texas Southwestern in Dallas – are offering an elective course on “environmental threats, including climate change.” “The elective courses were developed by Texas PSR, a nonprofit organization and a chapter of National PSR dedicated to addressing the gravest threats to human health, including climate change,” Doggett said in an email. Doggett said she was motivated to collaborate with her colleagues on the course because environmental health training was not offered when she attended medical school in the mid-1990s. “I worked in community clinics, providing patient care, but I realized my ability to help my patients was limited in many ways,” she said.
“We’ve learned that most of what determines someone’s health status comes from their environment and the conditions in which they live, not what a doctor can do for them in a clinic.” When asked why it is important for medical students to take courses on the impacts of climate on public health, Doggett emphasized the role of medical doctors in educating patients. “Physicians are well-positioned to help patients connect the dots between climate change and their own health and personal choices,” said Doggett. “We are also respected community leaders who can be impactful advocates for change at the policy level and with decision-makers and elected officials.”
Pets are gifts of the gods to humanity. But not everyone treats them equally. A weekly brushing session may be sufficient for some of us, and many people believe that grooming their pets is optional. That isn’t the case, though. Grooming your pets and keeping them clean and healthy is an absolute must. It’s good for everyone in the house, especially your pet. With proper grooming, mats, ticks, fleas, shedding, and other serious health conditions that you may have been unaware of can all be avoided. So, don’t worry; we’ll show you how important and beneficial regular grooming – for example, from a mobile pet groomer in Pembroke Pines – can be.
Why is grooming pets so important? When it comes to the most common pets, dogs and cats, following daily grooming rituals helps the animal grow accustomed to being stroked. Brushing your pet’s hair every evening might help both you and the animal relax. Regularly inspecting an animal’s eyes, teeth, and ears, for example, can help you prevent costly medical expenditures. Many pet breeds are susceptible to ailments that can be detected early simply by watching your pet. The last advantage is all about appearances. When animals are groomed, they not only feel better, but they also look better. Maintain your pet’s health by attending to its needs regularly.
Reasons to groom your pet. Having a mobile pet groomer in Pembroke Pines groom your pet is always beneficial to both you and your beloved companion. The following are some of the reasons why grooming is necessary:
- Brushing your pet regularly removes dirt, dandruff, and dead hairs, and reduces the number of hairballs that kittens and cats ingest. It also keeps tangles and matting at bay, which can cause pain and illness.
- Brushing a pet’s teeth is another routine that should be followed regularly. The dental health of a pet can have a significant impact on its overall health.
- Ears can be a problem for a variety of breeds that are more prone to diseases and parasites. They should be odorless and clean. Anything that appears red, swollen, or has an unpleasant odor can signal trouble.
- Pets’ nails can be incredibly sharp, causing serious injury if they aren’t cut regularly. Trimming your pet’s nails regularly helps lower the chance of injury, keeps them from getting caught in carpet or other furniture, and prevents ingrown nails.
In a nutshell, keeping your pet healthy and fit should be your priority if you have one. A well-groomed pet elevates your social standing. But it comes down to how much you care for your pet; if you love it enough, you’ll want to keep it healthy and strong.
Perfectly straight teeth are the good fortune of a few, and the outcome for those who have already gone through bite correction. In Minsk, the number of people wishing to get braces or aligners grows every year. The problem of crooked teeth has emerged sharply in recent decades because of the predominance of soft foods in the modern diet, which has led to a natural reduction in the size of the human jaw since the middle of the twentieth century. In turn, there is simply not enough room for the number of teeth nature has provided. Most often this leads to crowding of the lower front teeth, but, unfortunately, more serious forms of malocclusion often develop as well.
What is a malocclusion? It is an abnormal closing of the dentition in a relaxed state. There are several forms of malocclusion, including:
- Open bite
- High canines
The unhappy consequences of a malocclusion: malocclusion has a number of harmful consequences.
- Speech defects.
- Increased stress on the stomach and intestines.
- Accelerated abrasion of enamel and gum problems.
- Temporomandibular joint (TMJ) problems.
Malocclusion can lead to disordered speech, including lisps and the inability to properly produce strident sounds (sounds made by forcing fast airflow against your teeth, such as F, V, Z and Ch).
Increased load on the gastrointestinal tract
- Food is poorly chewed.
- When chewing, the teeth are loaded unevenly.
- Poorly chopped food enters the stomach.
- Food takes longer to digest.
- Over time, this leads to gastritis, ulcers and other diseases of the stomach and intestines.
Tooth enamel erosion and gum problems
- Uneven loading on one side leads to accelerated abrasion of tooth enamel.
- Teeth with thinned enamel are more easily affected by caries.
- In areas of insufficient load, the process of bone tissue loss (gum recession) begins.
- Receding gums expose the necks and roots of teeth, which lack adequate enamel.
- There is increased sensitivity to cold, hot and sweet.
- Caries quickly attacks the parts of the tooth that are not protected by enamel.
- Naturally, such problems may eventually lead to tooth extraction.
Changes in the temporomandibular joint
- The most serious consequences of incorrect loading on the teeth are changes in the temporomandibular joint.
- First, you will start to notice clicks and crunches in your jaw.
- After that, there may be moderate pain in the ears (in fact, it is the joint that hurts).
- Because of inflammation of the joint, at some point after opening the mouth sharply and widely you may be unable to close it.
- And the most unpleasant consequence is headaches due to a pinched trigeminal nerve. This is a pain that is difficult to localize, and you may not even understand why your head hurts.
The reasons for a bad bite. We distinguish four main groups of causes leading to the development of a bad bite:
- Children’s habits.
- Complications during pregnancy.
- Genetic factors.
- Untimely treatment of primary (milk) teeth.
The first group of causes is bad habits:
- sucking fingers or pens, and long-term use of a pacifier;
- putting the hands under the head.
The second group of causes is disruption of the development of the fetus during pregnancy:
- taking strong antibiotics;
- some illnesses of the mother in the first trimester of pregnancy.
The third group of causes is a genetic factor:
- some types of bite are very likely to be inherited.
The fourth group of causes is poor-quality treatment of milk teeth.
If you notice any changes in your child, visit the orthodontist as soon as possible. It is better to do this from the age of 6, every six months. It is the orthodontist who corrects the bite in children. In the early stages, as a rule, serious correction is not required and the growing child’s body tolerates it easily. For an adult, such procedures are an order of magnitude harder, both physically and mentally.
To prescribe an effective treatment, the patient will need to visit the orthodontist at least twice for a preliminary consultation. This is necessary for a complete diagnosis of the malocclusion. At the first consultation, after an examination alone, it is simply impossible to make a reliable diagnosis. An experienced doctor will carry out three types of diagnostics:
- dental casts;
- photographs of the patient’s face and teeth;
- X-ray examination.
The casts are needed to produce a plaster model, on which the doctor will calculate the free space and the lack of space, symmetry and asymmetry. Photographs of the face and teeth help determine asymmetry of the smile and face, record the initial state and the result, and capture non-obvious aesthetic nuances that must be taken into account when correcting the bite. X-rays can be used to assess the relative position of the roots, the inclination of the teeth, the presence of hidden (unerupted) teeth and the condition of the jaws. Based on these three investigations and an external examination, the doctor will draw up an accurate treatment plan. At the second consultation, you choose a suitable method from the several recommended by the orthodontist, based on your personal preferences and lifestyle.
Bite correction: treatment methods. There are three ways to correct a bite:
- orthodontic treatment with clear aligners;
- treatment with dental braces;
- maxillofacial surgery.
Clear aligner treatment. Clear aligners are orthodontic devices, similar to plastic mouth guards but with a slight offset relative to the dentition, which gradually correct the occlusion of the teeth; the patient changes to a new aligner each month. You will need to wear clear aligners every day and will eventually go through anywhere from three to several dozen of them, depending on your clinical case. Aligners are a very convenient option for correcting small defects in the bite. There are now several types of aligners on the market; they are similar in appearance and principle of operation, and differ only in manufacturing quality. We have described the entire treatment process with clear aligners in detail -Here-.
Features of treatment with clear aligners. The average duration of treatment with aligners is about one and a half years. The cost of treatment ranges from 800 USD and up, depending on the complexity of the treatment. Clear aligners have some advantages and some drawbacks:
- comfortable to wear;
- brushing your teeth every day is easy;
- you cannot eat in them (you can drink water);
- polycarbonate aligners break easily;
- they require a lot of self-discipline. If you stop wearing an aligner for a while, the next aligner simply will not fit and you will have to order a new set.
A good clear aligner:
- holds tightly;
- does not cause pain;
- feels comfortable to wear.
At your third appointment, the doctor will fit your first aligner. From then on, you visit the orthodontist to receive each new set of aligners and for a check-up. On some teeth, clear attachments are also placed; these are removed without a trace after treatment.
Treatment with dental braces. Nevertheless, in 80 percent of cases, occlusion is corrected with braces; aligners can only eliminate minor defects. The mechanism of action of braces has been studied very thoroughly for a long time and is improved from year to year.
Features of treatment with braces. On average, the duration of treatment with braces is one and a half years. The cost of treatment is from 400 USD. There are three main differences between braces and clear aligners:
- braces are cheaper than aligners;
- they can fix almost any bite problem;
- they are visible on the teeth.
The main types of braces. There are three types of braces. According to many orthodontists, the best braces are metal:
- they are the cheapest;
- but also the most effective.
The second most popular type of braces is ceramic (sapphire):
- they are not as noticeable, as they are made clear or to match the colour of the patient’s teeth;
- they are more expensive than metal ones;
- they may break.
Lingual braces are much less common because of the difficulties with care. Lingual braces are braces in the form of metal plates attached to the inside of the teeth, which is why they are also called “invisible” braces. However:
- they can injure the tongue;
- diction is badly distorted at first;
- oral hygiene is difficult;
- not every orthodontist will agree to work with them.
Installing and wearing braces. As in the case of clear aligners, the installation of the bracket system happens at the third appointment. The procedure takes one and a half to two hours and is completely painless. The feeling of tightness on the teeth will last 3 to 5 days. While wearing braces, you will gradually notice small changes in the contours of your face; there is nothing to worry about. You need to understand that in the process of correcting the bite, your teeth move into their rightful place, which for some reason they could not take naturally. Also, at times it will be hard to chew normally and there will be temporary speech defects, but this quickly passes. You will now need to visit the doctor once a month. During the appointment, adjustments are made and the braces are tightened. At the end of the treatment, the braces are removed and retention devices are placed (more on these below). If the patient follows all the recommendations and the doctor uses high-quality braces, then you will one hundred percent achieve the desired result. You can read a full description of the treatment process and of all types of braces -Here-.
Bite correction with maxillofacial surgery. The third type of treatment is surgical correction of the bite combined with the installation of a bracket system, in which the orthodontist works in close cooperation with the maxillofacial surgeon. The number of such cases is small, but nevertheless they do occur. Surgical intervention is undesirable in itself, because it entails a long rehabilitation period, but sometimes it is simply impossible to achieve a good result without a surgeon.
What needs to be done before bite correction? Before correcting the bite, in any case, you should:
- Fully treat all of your teeth.
- Have a professional oral hygiene (cleaning) performed.
After bite correction. At the end of the bite correction, you will need to use fixed retainers and clear retainers for about six months.
- Fixed retainer – a non-removable metal wire on the inside of the teeth, which serves to anchor the result of the orthodontic treatment while the bone tissue fills the resulting voids.
- Clear retainer – or “night retainer” – differs from the usual aligners in that there is no longer any offset in relation to the dentition. It will also need to be worn every night for six months.
It is possible to download and print all coloring pages absolutely at no cost! Coloring pages can be readily downloaded and printed from the net, free of charge, as many times as you desire. Your toddler is going to be eager to produce new coloring pages. According to Wikipedia: A coloring book (or colouring book) is a type of book containing line art for a reader to add color using crayons, colored pencils, marker pens, paint or other artistic media. Coloring books are generally used by children, though coloring books for adults are also available. They cover a wide range of subjects, from simple stories for children to intricate or abstract designs for adults. The golden age of coloring books is considered to be the 1960s. Beyond the skill of holding a crayon or pencil properly, learning to keep within the lines drawn on the coloring pages is another example of fine motor skills. This is a more advanced concept, and may well take several years for the child to fully accomplish, so only positive comments should be made on your youngster's coloring attempts in this regard. It is better to have them enjoy coloring and want to do it often than become discouraged by negative feedback and harsh criticisms. Christian parents can easily find many free Bible coloring pages online. Even if your family is not religious it is important for children to understand religious concepts, icons, and events from the Bible. This is an issue of cultural education, not just a moral foundation. In a broader perspective these images can be used as a starting point for conversation on moral topics in general. Of course coloring pages depicting Jewish, Hindu, Buddhist and even Humanist principles are freely available online. Beyond these simplistic and often jingoistic sources, a foundation for a broader moral education can be found if parents use a little creativity in their search. Many state government departments offer free coloring sheets promoting good citizenship. For example the State Department of Environmental Protection might offer activity pages promoting keeping the environment clean. Other important civic and moral lessons that can be taught via coloring are sharing, loyalty and self-discipline. A love of coloring is something most of us have felt at one time or another. Trust me, after you start you are going to be in love with it again.
My name is Mike Gregory, I am known for my love of burgers, 80s hair styles and my constant harping on about simple code (K.I.S.S. anyone?). This week, as part of our ongoing effort to improve our knowledge in Systems, I gave a presentation about the Hibernate framework.
What is Hibernate? Hibernate is an object-relational mapping tool for the Java programming language. Essentially it provides a front-end for your database that is object-orientated rather than relational. This way you can work with Java objects, but you can make them persistent by storing them in, or loading them from, your database.
• • •
Why use it? When writing code in an object-orientated language such as Java, it is often useful to store your data in a database. Popular relational databases such as MySQL and PostgreSQL offer a robust and powerful way to store and interact with data; however, there can be a mismatch between the relational way of doing things and object-orientated paradigms that can add unwanted complexity to applications. ORM (Object-Relational Mapping) is a way to provide an object-orientated front-end for the database, with the aim of simplifying your data model. The idea is that you can think purely in terms of objects, and come up with more elegant solutions to problems rather than having to juggle responsibilities between an object-orientated and relational way of thinking (there's a short code sketch at the end of this post).
• • •
Why are Colonel Duck interested in it?
- Anything that simplifies our solutions is good in my book. When using relational databases in Java you have two separate ways of organising your data: the persistent relational layer, and the malleable Java object layer. If we can think of solutions purely in terms of objects, we can write elegant solutions with simple architectures, which will mean fewer bugs!
- Prettier code. I’m sure anyone who reads code all day will agree, pretty code is easier on the eyes. Hibernate can allow us to remove lots of JDBC boilerplate code as well as largely automating CRUD (Create, read, update, delete) operations.
- Again, fewer bugs! Having less or no SQL code in our Java methods. SQL code is a common source of application errors because the Java compiler and IDEs cannot automatically warn you about errors in it as they can with Java code.
- Lower coupling. Hibernate is database independent, so if we need to change our database software we can do so easily.
• • •
There are many other reported benefits, and TutorialsPoint has a good guide on getting started here. I am always interested in anything that complements the KISS principle (Keep It Simple Stupid), as it’s very easy for code to get complicated even without the added concern of how to link your Java objects to your relational database. This is just one stepping stone on my quest to create simple, robust code and eat delicious burgers!
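To make the ORM idea concrete, here is a minimal, hypothetical sketch of the kind of code Hibernate lets you write. The Duck entity, the hibernate.cfg.xml it assumes, and the surrounding setup are illustrative only, not code from our systems; the package names assume Hibernate 6 (older versions import javax.persistence rather than jakarta.persistence).

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// A plain Java object mapped to a table purely through annotations.
@Entity
class Duck {
    @Id @GeneratedValue
    Long id;
    String name;

    Duck() {}                              // Hibernate requires a no-arg constructor
    Duck(String name) { this.name = name; }
}

public class HibernateDemo {
    public static void main(String[] args) {
        // Connection details come from a hibernate.cfg.xml on the classpath.
        SessionFactory factory = new Configuration()
                .configure()
                .addAnnotatedClass(Duck.class)
                .buildSessionFactory();

        try (Session session = factory.openSession()) {
            session.beginTransaction();
            session.persist(new Duck("Colonel"));      // INSERT generated for us – no JDBC boilerplate
            session.getTransaction().commit();

            Duck loaded = session.get(Duck.class, 1L); // SELECT by primary key
            System.out.println(loaded.name);
        } finally {
            factory.close();
        }
    }
}
```

Compared with hand-written JDBC there are no SQL strings and no ResultSet mapping, and because the database dialect lives in configuration, swapping MySQL for PostgreSQL would be a config change rather than a Java change.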
What you fix your attention on accumulates. What you apply pressure to either strengthens or weakens. In this way good thoughts become great ideas and bad thoughts become potentially destructive. What you fix your attention on will grow in strength. The freedom to think leads to the freedom to speak; the freedom to speak leads to the freedom to live. Good choices come from good thoughts. Articulation of good thoughts leads to the accumulation of freedom. Thinking leads to goals, to action, the expression of passion. Thinking leads to books, to music, to buildings, to invention. Thinking leads to love. Thinking also leads to power, both good and bad. Thinking leads to theft, oppression and abuse. Thinking can lead to sin, to hate, to violence, to abandonment – to separation. If thinking about an idea serves to get an idea started, repetition of a thought can lead to initiative. Maturing a thought from a flash of inspiration to a well-articulated thought creates momentum and purpose. Purpose with the expectation that the thought can be realized has been the beginning of many good things. Thoughts affixed to paper have a different resonance than thoughts that are spoken. If I speak a thought – does it carry more emphasis? How does it affect my thought’s value? Does it increase its power? When sharing thoughts, repetition can be affirming or detracting to relationships; in business, in social settings, or in love and family. How many times have you had a good idea only to lose it? Perhaps a good idea starts but you can’t seem to manifest it into reality? It’s true for almost all who think to begin with – and I’m sure all of us think. Most potentially good thoughts are not nurtured at all, and therefore suffer loss. And yet those thoughts on which we occasionally dwell usually return good and often surprise us. You may have said to yourself, “…you know, I was thinking about that”. If fixed attention strengthens or weakens, distraction must have the opposite effect on thoughts. If one is accumulating momentum of thought towards a positive outcome – thought transforming into power – interruption or distraction can take away from that power. If distraction is desired to escape thought, then this is only valuable if the thoughts are destructive. Confronting negative thoughts with transactions of positive ones is a better means of coping than escape. Escape more often leads to self-destruction and the defeat of noble ideas and liberty. Whom do I love, and in what way can my thoughts honor them or hurt them? These thoughts are essential to the control of good over bad. Thoughts can be given away or sold. Thoughts lead to conversation. Conversation to transactions. My job is to think, to organize those thoughts, to share those thoughts. What I think about – who I think about and what I am supposed to think about – is relative to my job. Thoughts become ideas, which I share. Other people’s thoughts I am meant to receive, assign value to, and accept or reject. To those I serve, it’s important to value their ideas well. Balancing whom I serve and how my thoughts support them keeps my thoughts focused on how best to serve them. Returning to thoughts and continuing the work they begin is the foundation of power. When distracted, it’s essential to return the focus to those thoughts until the value of the thought is realized. The will to do something comes from persistent thought. A powerfully constructed thought articulated to others leads to powerful relations in business and in life.
Thoughts grow their maker in strength, character and in power. If you are to do something well, fix your attention upon that thought. Pick a thought, repeat it and nurture it. Focus and protect it. Avoid distraction until the thought is articulated. Then return to that thought and its articulation. Share it, nurture it, and action will begin. Remember that thinking makes it so. What you fix your attention on will accumulate.
Transcript: Example: A child is able to visually identify this as "sand". At the sensory table, the child can now also feel the sand to identify it. Kids are either playing at the sensory table with each other (communicating back and forth) or next to one other. Social/Emotional Materials Set a table outside for the flowers/vases to sit on. This way they'll get plenty of sunlight. (Has to be outside rather than by a window because the windows are too high up for the kids to examine the flowers properly) Sensory: Science: Children will see overtime that the color of dye that they added to their water will appear on the flower petals. Language/Communication plastic vases white flowers color dye time sunny area The sensory bucket has to be on a table that is accessible from all sides. (This way kids can play without the worry of crowding.) (depends on each science activity) Fine motor: Children use small muscles in their fingers to grip the shovels (or other utensils) Children use small muscles in their hands to feel different substances (sand, bubbles, shaving cream, etc.) 3 Children per vase/flower This will not only cut back on how many vases we need to buy, but the kids can work together in groups Physical Learning Outcome/Skills for Each Domain (Example Lab: Flower Color Absorption) Children can match the name of the material to the feeling of the material. Receptive: Understanding what their teacher is saying. (The teacher will be describing how the flower petals change color) Expressive: Children will get excited and express to their friends/parent/etc. how much they like the experiment = Cognitive + Language/Communication Sensory: Science: Arrangement Fine Motor: Children grasp the flower to put in the vase. Children squeeze the food dye into the water Physical Social/Emotional By: Michelle, Mona, and Brianna Cognitive Sensory: Science: For example: Child 1 asks asks Child 2 to help him scoop the sand. Child 2 understands what is being asked, and responds accordingly. Sensory and Science Centers Children can communicate back and forth, giving each of the kids opportunity to: -Express what they want to say (Expressive) -Understand what their friend is saying (Receptive) sensory table Squishy Baff sand bath bubbles shaving cream shovels cups food dye Transcript: Sensory Presented by MS.BONNIE & MS.NAGHELY Infants Infants SUBTOPIC 1 5 senses Taste, Sound, Touch, Smell, Sight How? How? Smell: scent bottles for babies , make sure to use safe bottles or containers that will not harm babies : filled with basil , lemon, lavander ( natural scents) Touch: rice jumping ( baby in bouncer with rice on their feet) Taste/touch : yogurt painting with food coloring and plain yogurt. sight : Balloon kicking ( baby laying down with a helium balloon tied to foot and they watch as they are able to control it sound: Shakey blocks ( little cube containers sealed with beans) PICTURES PICTURES Toddlers Toddlers keeping in mind that sensory is not limited to a bin. Taste: cornstarch and yogurt silly putty. 
Sound: instruments (bells drums Touch: Pretend snow (2 cups baking soda, 1/2 cup water, and glitter in a small bowl) or Real snow when available Smell:Cinnamon scented Rice sight: sensory bottles ( heavy liquids vs light flowing items) 5 senses rule 5 senses rule Pictures Pictures Preschool/ Pre-K Preschool/ Pre-K Flip your perception of sensory Taste/Smell: cooking activities such ( banana bread, cookies, butter) Sound: Making rain sticks (also great science :Gravity) Touch: let them create with water ( let them make soup, a car wash, a bath for animals or babies) water beads are fun too. Smell: apples/vinegar/vanilla/ cinnamon in jars ( let it be an area to explore at any time. Sight: use mirrors with dry erase markers to draw themselves use the world around you ( snow, leaves, dirt) 5 Senses Advanced 5 Senses Advanced Pictures Pictures SPD: Sensory processing disorder SPD: Sensory processing disorder Transcript: Characteristic of Pop Art POP ART Background Information Pop artists wanted to create a style of art work that could gain instant meaning from people. Recognizable imagery, drawn from popular media and products Usually bright colours Flat imagery influenced by comic books and newspaper photographs Images of celebrities or fictional characters comic books, ads, or magazines in sculpture, and innovative use of media Pop Art began as response to the mass imagery- in the form of comic books, advertising, and the "everyday objects"- expressed in the popular culture of this time. An art movement from the 1950s & 1960s in Britain Vanessa & Wei Transcript: HNH 30506 Group 5 Esther Brouwer Michelle Brouwer Marieke Verbakel Katerina Zara Supervisor: Maria Salazar Cobo The sensory profile of hummus Which one would you prefer? Introduction Chickpea = legume 135 grams 35 grams Health benefits Objective Which of the different attributes of low-chickpea content hummuses differ from the attributes of high-chickpea content hummuses, and are these related to liking of the hummuses? High > low Products Natural hummus 0,99 - 2,15 Albert Heijn Aldi Garden Gourmet Jumbo Lidl Maza Attributes Panelists Execution Execution EyeQuestion VAS Results SPSS Means ± SD Spiderplot Mixed models Significance Results Discussion Discussion Conclusion Chickpea content Liking Increasing legume intake Conclusion Fun Fact To meet your recommended weekly legume intake (135 gram) you have to eat 365 gram of the more liked low chickpea content hummus.... Fun fact Transcript: Riesling 2010, Beamsville Bench, Niagara Peninsula For Average Working Class As Weekdays Stress Buster wine. Viticulture notes Thirty Bench Vineyard & Winery is situated near the Thirty Mile Creek on the Beamsville Bench (Niagara Penninsula) The mineral rich soils and complex landscape of the Beamsville Bench provide ideal conditions for growing grapes. Sloping vineyards backed by the escarpment, moderating breezes and excellent drainage all contribute to the natural character of the wines from this appellation. viticulture notes wine makers notes Lengthy skin contact and the use of new oak barrels for fermenting and barrel aging of the reserve wines are just a few of the costly production methods they utilize. The end result is wines of superior quality. Their passion for finely crafted wines is reflected in the quality and character of the wines they create. Food Paring very ripe showing from the exceptionally warm vintage. Medium bodied with that touch of sweetness to round out the striking acidity. 
Flavours of white nectarine, green apple and key-lime fill the mouth This style will appeal to blackend snapper or pork chops. Interests & achievements Workplace description The origins Skills description Aging Riesling wines are often consumed when young when they make a fruity and aromatic wine . $12 to $25- Drink young or age to 6-8 years. Languages Reisling School 1 Workplace Position It's a Wonderful Life... Would Be Even Better With a Bottle of Wine. Person 2 contacts This is a young, fleshy riesling with ripe pear-peach fruit, lemon and vaguely spicy and some petrol/youthful reductive notes. Despite being only 10.8% alcohol its filled out with the ripe almost mashed fruit of the 2010 vintage, a hint of sweetness and lower acidity. But it is still balanced, and leaves a very long, lemony and tight finish. A great example of Bench riesling delivered well for under $20 Workplace References Skills Hobbies Tasting Notes Analytical Personal (cc) photo by Jakob Montrasio School 2 Skills add Personal details Person 1 contacts Position Interests School 3 Creative description Position Transcript: Wowing template. Click through in 20 steps. But we can move beyond the present. Why? Here is something small... Photo credits: 'horizon' by pierreyves @ flickr Takeaway. Bye. It could be much larger! Leadership Another An Example... You get the idea 30 Let me give you some perspetive.... One thing.. we started from here Well here we go And Finally Here is some context. Provide some common ground. Or something from the present, that we should look beyond. (Double click to edit this text box) So... Leadership So.... Description: Stand far above the stacks and stacks of flat, boring resumes on any hiring manager’s desk with a Prezi resume template. Just customize this Prezi presentation template to create your very own “Prezume” and impress them with your dynamism, coolness, and originality. Description: Show the big picture, zoom in on details, and explain clearly how it all relates with this Prezi executive brief or Prezi nonprofit template. The lively image and bold colors make it easy to create compelling, engaging executive brief or nonprofit presentations. Description: A well-organized lesson plan is the difference between getting things done and things getting out of hand. This vibrant, customizable, easy-to-use Prezi presentation template features a sticky note theme, so you'll be able to keep track of topics, assignments, exams, and more without missing a beat. Now you can make any subject more engaging and memorable
Date of Award Master of Arts (M.A.) C. E. Corbieu [?] In compiling and writing this dictionary the needs of the high school student and teacher interested in mathematics have been kept constantly in mind. The mathematics of the high school is perhaps the simplest of its kind and yet very few textbooks carefully define all technical terms as they are introduced. The thoughtful student may turn to an abridged or unabridged dictionary but will find in most cases that the definitions of the terms are vague, often misleading, and in some cases not given. The words in the vocabulary have been arranged in alphabetical order so that the reference to any one of them might be made in a convenient manner.
Standing Against Hate and Discriminatory Acts in Orange County
On Friday, October 25, 2019, an incident occurred during a football game between Segerstrom and Marina High Schools in Orange County, in which two student-generated signs with racial undertones were displayed near the entrance of the stadium, aimed at students and families of Segerstrom High School. Commendably, the Santa Ana Unified School District and the Huntington Beach Union High School District (HBUHSD) addressed this incident immediately. HBUHSD and Marina High School accepted responsibility and immediately apologized to the students, families and staff of Segerstrom High School. This was a necessary and prudent action taken by both administrations, and yet the incident will reverberate for some time to come. Discrimination is an issue that must continually be addressed at every level of society in a constant attempt to ensure that displays and acts of hate are denounced, rectified, and not repeated. I am committed to helping our community resolve any and all issues of discrimination -- both blatant and those less explicit. Whenever differences between us are used in a negative attempt to divide us as a community, we must stamp them out. The most recent incident is indicative of a larger problem that has manifested itself in recent years. Similar occurrences have been reported in 2019 at schools, athletic events, and elsewhere involving young adults:
- In March, racist activity was filmed at a party involving students of Newport Beach and Costa Mesa.
- In August of this year, there was an event involving Pacifica High School in the Garden Grove Unified School District.
- And a month later, in September, at a football game in predominantly white Aliso Viejo, the visitors from a predominantly Latino high school in Santa Ana were met with signs of “Build the Wall” and “We Love White,” according to the Santa Ana principal.
These incidents are alarming, and unfortunately, they include only those that have generated broad media and/or internet attention. The problem may actually be worse and more pervasive. If nothing is done by leaders at all levels, their silence delivers a message of tacit approval, ensuring these events will escalate. Furthermore, these instances appear to be prevalent around our school systems and our youth, who are the most vulnerable to this infestation of ignorance. The fact that young people are either unaware of the harm that these events cause, or worse, don’t care, illustrates that community leaders have not adequately educated these young perpetrators. Such efforts are obviously not enough. Early in my career, I worked as an Assistant U.S. Attorney prosecuting skinheads terrorizing an African American family. When I served in the California State Assembly in the early 1990s, I helped spearhead efforts to stiffen criminal penalties for hate crimes. Those laws increased penalties for people engaging in violent or threatening behavior based on race, gender, religion, age, disabilities or sexual orientation. They also established tougher sentences and higher civil penalties and included provisions for offenders to take ethnic sensitivity classes. As one of Orange County’s current Senators in the State Legislature, I believe it is imperative that we leaders speak with a strong, unified voice whenever instances of hate occur in the communities we represent and find common ground, build empathy, and promote respect.
With this in mind, our office will be holding neighborhood discussions in the coming weeks and months to foster a sense of unity and acceptance between people from all backgrounds. These discussions will involve community businesses, schools, houses of worship, politicians, children, and members of community organizations of all kinds. Our office is also researching adding stiffer penalties and punishments for incidents in which hate escalates to threatening situations, or acts of violence. The good news is that more often than not, when hate flares up, good people rise up in response. We will address this problem, and stand up to promote tolerance and inclusion. When we come together in unity to eradicate hate, our voices are louder. If we are loud enough in our messages of unity, the young men and women of our community will hear them, learn from them, and embrace them. If we begin celebrating our commonalities and praising our differences, then we will promote a safe place for our students and our community at large. We live, work, and lead at a time when polarization and hate speech is the highest it has been in the past 50 years. The confrontations and rhetoric that have occurred nationally, but more importantly in Orange County, are something that we have the power to change. Please join me in nonpartisan discussions to find common ground, build empathy, and promote respect. It is important we come together as a community and celebrate similarities, rather than letting the rhetoric of hate and anger further the divide.
Garlic provides much more than fabulous flavor for your meals. It is also an excellent source of manganese and vitamin B6, a very good source of vitamin C and copper, and a good source of selenium, phosphorus, vitamin B1, and calcium. However, it is the sulfur compounds in garlic that really make this pungent vegetable a superstar in terms of overall health benefits. The sulfur-containing compounds in this vegetable have been shown to provide us with health advantages in a wide variety of body systems, including our cardiovascular system, immune system, inflammatory system, digestive system, endocrine system, and detoxification system. Garlic provides us with cardiovascular benefits in a variety of different ways. In fact, so diverse are these different pathways for cardiovascular support that new studies keep discovering new ways in which garlic helps protect this body system. By far, the best researched of these pathways are the antioxidant and anti-inflammatory properties found in this allium vegetable. Chronic unwanted inflammation is, of course, a special risk within our blood vessels since it can contribute to damage of our blood vessel walls, formation of plaque, and eventual clogging of our blood vessels. A sulfur-containing compound called thiacremonone has been most closely associated with the anti-inflammatory activity of garlic in our cardiovascular system. Right alongside this anti-inflammatory support is the antioxidant support provided to our heart and blood vessels through consumption of garlic. In this antioxidant category, the many forms of cysteine found in garlic have received special research attention, as has the presence of glutathione and selenium. Of course, it is also important to remember that garlic is an excellent source of the antioxidant mineral manganese and a very good source of vitamin C, another key antioxidant nutrient. All of these health benefits combine to make this allium vegetable helpful in supporting and maintaining cardiovascular health. In addition, it is worth noting that the everyday flexibility of our blood vessels has been shown to improve with intake of garlic. One exciting area of research involves the potential role of garlic in supporting bone health. Studies have shown that cigarette smoking increases our risk of osteoporosis, inadequate bone mineral density, and inability to heal from bone fractures. Conversely, intake of garlic may be able to reduce the risk of these problems by offsetting some of the potential damage caused by chronic exposure to cigarette smoke. Especially interesting in this context is the potential of garlic to help protect osteoblast cells from damage. Osteoblasts are bone cells that help produce new bone matrix (the intercellular substance of bone tissue). If garlic intake can help protect these cells from potential damage by cigarette smoke, it may be able to help support formation of new bone matrix and maintenance of existing bone structure. Incorporating Trévo into your daily diet can ensure that your body receives the amazing health benefits of garlic as well as 173 other fabulous and nutrient-dense vegetarian ingredients, all in one delicious product.
Did you know some vehicles idle for 3 to 4 hours a day? Can you imagine how much that can cost your fleet? Vehicle idling is an issue every fleet manager has to face, but thanks to telematics data from Anstel’s Connected Fleet solutions, it is possible to rein in this issue and bring down those expenses. Why is idle time so high? Before you know how to reduce idle time, you need to know why it exists in the first place. Every fleet is different. For instance, long haul fleet drivers usually have higher idle times than those running a local route. Typical reasons for high idle time are: - Loading and unloading - Toll booths - Document processing - Stopping to use a phone - Warming up the vehicle’s engine or cab - Rest stops Of course, you have to keep in mind that there are times when avoiding idling isn’t possible, such as at stoplights or stop signs. How much does it cost you? As mentioned, a large number of fleet vehicles spend at least 3 hours idling each day, and idle time can climb as high as 8 hours per day. Excessive idling costs fleet companies almost $12,000 per truck annually. You lose a percentage point of fuel economy for every 10% of idle time. Here are the costs: Additional maintenance expenses According to the Environmental Protection Agency, excessive idling means more maintenance costs. These expenses go up by approximately $2,000 per vehicle per year. Idling causes twice the damage to internal components compared to turning the engine off and on. Moreover, idling shortens the interval between oil changes. More frequent maintenance also increases vehicle downtime, which has its own impact. For a commercial truck, the estimated fuel cost is around $70,000 annually, but idling wastes around 8% of those funds. The more vehicles you have in your fleet, the more you waste. Environmental & health costs Carbon dioxide emissions generated by the transportation industry account for more than one-third of total emissions. Increased idling diminishes air quality for drivers and the community. While eliminating emissions altogether isn’t possible, idle time can surely be reduced. How to reduce idle time? There are numerous actions you can take to reduce vehicle idle time and associated costs via Anstel’s Connected Fleet solutions. Take a look: Monitoring driver behavior Connected Fleet solutions record a lot of data about your vehicle, including how it is being handled by drivers. This includes braking, speeding, cornering, aggressive maneuvers, idling, and much more. Reckless or aggressive driving causes wear and tear to components much faster, thus increasing costs. This type of behavior, especially speeding, also increases fleet risk. If that isn’t bad enough, it raises fuel costs and drives up insurance premiums. Telematics data provides detailed insight into driver behavior. This information can be used to develop training modules so that drivers follow safety rules on the road. For general fleet idling, this information helps you gauge how long your fleet idles and where. Tracking idle time Connected Fleet solutions let you set real-time alerts for driver behavior. In most cases, fleet managers use them to warn drivers if they are speeding. But alerts can just as easily work to stop idling. Telematics devices monitor engine diagnostics and have the ability to track idling time. If a vehicle idles for too long, real-time alerts can be customized and sent to the driver and manager, prompting them to shut off the engine.
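To put those percentages in concrete terms, here is a minimal back-of-the-envelope sketch of annual idle fuel cost per vehicle. The burn rate, fuel price, and working days are illustrative assumptions rather than figures from Anstel or the EPA.

# Rough estimate of what idling costs one vehicle per year.
# All constants below are illustrative assumptions, not vendor or EPA figures.

IDLE_BURN_GAL_PER_HR = 0.8    # assumed fuel burn for a heavy truck at idle
FUEL_PRICE_PER_GAL = 4.00     # assumed diesel price
WORK_DAYS_PER_YEAR = 260

def annual_idle_cost(idle_hours_per_day: float) -> float:
    """Fuel cost attributable to idling for one vehicle over a working year."""
    return idle_hours_per_day * IDLE_BURN_GAL_PER_HR * FUEL_PRICE_PER_GAL * WORK_DAYS_PER_YEAR

# Compare a vehicle idling 3 hours a day with one trimmed to 1 hour a day.
for hours in (3.0, 1.0):
    print(f"{hours:.0f} h/day idle -> ~${annual_idle_cost(hours):,.0f} per vehicle per year")

Even with these conservative placeholder numbers, cutting idle time from three hours to one saves on the order of a couple of thousand dollars per vehicle in fuel alone, before the maintenance costs mentioned above.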
Idle time reports Using telematics, you can see how long a vehicle has idled, along with its location. Use these reports to understand where you can cut down on idling. It could mean getting an auxiliary power unit for your vehicles. Alternatively, you might consider enforcing rules such as having drivers turn off the engine after a few minutes of idling. Afterwards, you can use telematics data to ensure that drivers are following these rules. Route optimization Heavy traffic and congested roads are a huge problem for fleets. Connected Fleet solutions use sophisticated algorithms to map out every possible route to a destination, and then optimize them to create the most cost-effective route. Variables such as traffic, accidents, construction, and other disruptions are factored in during route optimization. The route can be adjusted before or even during the trip. Less traffic means less idling too! Geofencing Geofencing is when you create a perimeter within which vehicles operate – you get an alert if they move in or out. Set a geofence around particular idling hot spots, and set real-time alerts, so you know exactly when vehicles enter and how long they stay there. Using Anstel’s Connected Fleet solutions, it is possible to prevent and reduce idle time by providing feedback and real-time alerts. Thus, you save quite a bit of money in the long run and protect your profitability.
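As a minimal sketch of how a geofenced idle alert might be wired up, assuming the telematics feed delivers periodic GPS and engine-status samples, the example below flags a vehicle that has sat engine-on inside a hot-spot perimeter past a threshold. The coordinates, radius, and five-minute limit are placeholder values, not Anstel defaults.

import math
from datetime import timedelta

# Illustrative geofence around an idling hot spot (e.g., a loading dock).
FENCE_LAT, FENCE_LON = 40.7128, -74.0060   # assumed coordinates
FENCE_RADIUS_M = 300
IDLE_ALERT_AFTER = timedelta(minutes=5)    # assumed policy threshold

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in metres between two GPS points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_idle_alert(samples):
    """samples: iterable of (timestamp, lat, lon, engine_on, speed_kmh) tuples,
    ordered by time. Returns the timestamp at which an alert should fire, or None."""
    idle_start = None
    for ts, lat, lon, engine_on, speed in samples:
        in_fence = distance_m(lat, lon, FENCE_LAT, FENCE_LON) <= FENCE_RADIUS_M
        idling = engine_on and speed < 1
        if in_fence and idling:
            idle_start = idle_start or ts
            if ts - idle_start >= IDLE_ALERT_AFTER:
                return ts  # hook: send the real-time alert to driver and manager here
        else:
            idle_start = None
    return None

A production system would stream these samples continuously and reset per trip, but the core logic, a dwell timer gated on "inside the fence, engine on, not moving", is essentially this small.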
Burts Trail meanders from the Burts’ family homestead barn (lower) at Horse Lake Reserve and connects to the lower end of Glacier View Trail. It provides unique views to the north of Burch Mountain and the open spaces of the northern portion of the Reserve. Locals in Leavenworth only have to look up to know the value of the Land Trust’s Mountain Home property. The property’s forested hillsides, dramatic post-fire ecology, and open ridgelines provide amazing views from the valley, instead of the eight houses that were planned. E. Lorene Young had cherished her 3.5-acre property since 1947. She shared the Wenatchee riverfront property in Leavenworth with the birds that frequented her feeders, as well as the deer, occasional black bear, and other wildlife seen regularly on this beautiful property. Hikers, runners, and bikers in the Wenatchee Valley have long known that the foothills provide amazing recreation and scenery. But in 2001, development threatened access to this local resource. The Chelan-Douglas Land Trust responded with the Save the Sage campaign, rallying local support to preserve this community asset. Working together, we accomplished the 100-year community goal of acquiring and protecting Saddle Rock forever. In 2000, the Jacobson family left a permanent legacy to the Wenatchee community when it donated 35 acres of prime shrub-steppe habitat in the Wenatchee Foothills to the Land Trust. This gift guarantees permanent community access to enjoy the beauty of the foothills. When you stand in the middle of Horse Lake Ranch, you stand in the middle of a conservation success story. Foothills North Natural Area’s 382 acres of shrub-steppe habitat provide stunning views of the Columbia and Wenatchee Rivers, vital homes for wildlife, and an important trail connection. The Fairview Canyon property connects Horse Lake Reserve with the adjoining National Forest, protecting the vital link that allows wildlife to move from the mountains to the valley. Mule deer use this migration corridor to access their winter range in the Wenatchee Foothills.
Travel Immunizations / Vaccines Some destinations have required vaccines (e.g., Yellow Fever) and many have recommended vaccines (e.g., Typhoid). Recommendations for specific vaccines related to travel will depend on your specific itinerary, duration of travel, what activities you will engage in during travel, and your prior vaccine and health history. Vaccinations against diphtheria, tetanus, pertussis, measles, mumps, rubella, varicella, poliomyelitis, hepatitis A, hepatitis B, Haemophilus influenzae type b (Hib), rotavirus, human papillomavirus (HPV), and pneumococcal and meningococcal invasive disease are routinely administered in the United States, usually in childhood or adolescence. Influenza vaccine is routinely recommended for all people aged ≥6 months, each year. A dose of herpes zoster (shingles) vaccine is recommended for adults aged ≥60 years. If a person does not have a history of adequate protection against these diseases, those immunizations should be completed. A visit to a clinician for travel-related immunizations should be seen as an opportunity to bring an incompletely vaccinated person up-to-date on his or her routine vaccinations.
A real estate transfer tax, sometimes called a deed transfer tax, is a one-time tax or fee imposed by a state or local jurisdiction upon the transfer of real property. Usually, this is an “ad valorem” tax, meaning the cost is based on the price of the property transferred to the new owner. What are transfer taxes on a mortgage loan? Land title transfer fees in Alberta (updated in 2021): $50 base + $2 for every $5,000, or portion thereof, of the property value. There is no land transfer tax rebate in Alberta. For the mortgage, the fee is $50 base + $1.50 for every $5,000, or part thereof, of the mortgage amount. Who pays transfer taxes on mortgage? In California, the seller traditionally pays the transfer tax. Depending on local market conditions, transfer taxes can become a negotiating point during closing. For instance, in a strong seller’s market, the seller may have multiple offers and will likely find a buyer who agrees to pay the transfer tax. Do you have to pay transfer taxes on a refinance? There is zero transfer / recordation tax for refinances. What is the example of transfer tax? For example, the estate tax and gift tax are both types of transfer taxes. The estate tax entails the right to transfer property from the estate to an individual or entity after death. The capital gains tax is another example of a transfer tax involving title transfer. Are closing costs tax deductible? Can you deduct these closing costs on your federal income taxes? In most cases, the answer is “no.” The only mortgage closing costs you can claim on your tax return for the tax year in which you buy a home are any points you pay to reduce your interest rate and the real estate taxes you might pay upfront. Who pays transfer fees buyer or seller? Transfer fees are paid to a transferring attorney, appointed by the property’s seller to transfer ownership to you. This cost varies, depending on the purchase price, and comprises the conveyancer’s fees plus VAT, and the transfer duty payable to SARS. Why are transfer taxes so high? So what are these fees and why are they so expensive? Transfer taxes are charges levied by various government bodies on the conveyance of homeownership from one party to another. The taxes are proportional to a home’s value, and since your home is likely your most valuable asset, those taxes can add up fast. Who pays the city transfer tax? The buyer pays for the recording, escrow, title and 50% of the city transfer taxes. Buyers in San Francisco County pay the costs for the recording, title and insurance. Sellers pay the city and county transfer tax fees. What are the three transfer taxes? There are three federal wealth transfer taxes: (1) the estate tax; (2) the gift tax; and (3) the generation-skipping transfer (GST) tax. Each wealth transfer tax has an amount that may be transferred before the respective tax is imposed. Are real estate transfer taxes deductible?
You can’t deduct transfer taxes and similar taxes and charges on the sale of a personal home. If you are the buyer and you pay them, include them in the cost basis of the property. If you are the seller and you pay them, they are expenses of the sale and reduce the amount realized on the sale. What are typical closing costs on a refinance? Mortgage refinance closing costs typically range from 2% to 6% of your loan amount, depending on your loan size. National average closing costs for a refinance are $5,749 including taxes and $3,339 without taxes, according to 2019 data from ClosingCorp, a real estate data and technology firm. Do I have to pay closing costs when refinancing? Closing costs are lender fees and third-party fees you pay when getting a mortgage. You have to pay these on a refinance, just like you did on your original mortgage. Closing costs aren’t a set amount, though. How do you figure out transfer tax? Transfer tax is assessed as a percentage of either the sale price or the fair market value of the property that’s changing hands. State laws usually describe transfer tax as a set rate for every $500 of the property value. What is transfer income tax? Section 2(47) of India’s Income-Tax Act defines “transfer”, in relation to a capital asset, to include: (i) the sale, exchange or relinquishment of the asset; (ii) the extinguishment of any rights therein; or (iii) the compulsory acquisition thereof under any law. What is a gratuitous transfer? A gratuitous transfer is a transfer of property freely given, such as a gift from a donor or a bequest from an estate. … Gifts are the gratuitous transfers of property by a living donor to a donee, or beneficiary.
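To make the per-$500 convention and the Alberta fee schedule quoted above concrete, here is a small illustrative calculation. The $0.55 rate is a placeholder, and actual rates and rounding rules vary by jurisdiction, so always check the local schedule.

import math

def transfer_tax_per_500(sale_price: float, rate_per_500: float = 0.55) -> float:
    """Transfer tax charged at a set rate for every $500 (or fraction) of value.
    The $0.55 default is illustrative only; rates differ by jurisdiction."""
    units = math.ceil(sale_price / 500)
    return units * rate_per_500

def alberta_title_transfer_fee(property_value: float) -> float:
    """Alberta-style land title transfer fee: $50 base + $2 per $5,000
    (or portion thereof) of property value, per the figures quoted above."""
    return 50 + 2 * math.ceil(property_value / 5000)

print(transfer_tax_per_500(400_000))        # 800 units x $0.55 = $440.00
print(alberta_title_transfer_fee(400_000))  # $50 + $2 x 80 = $210.00

The same pattern, round the value up to whole units and multiply by the unit rate, covers most "per $500" or "per $5,000" schedules; only the base fee and unit rate change.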
One limitation to developing music software has been the tight coupling of music formats to development tools. For instance, Finale plug-ins require C or C++ programming, the Humdrum toolkit requires familiarity with Unix, and MuseData tools run on TenX, a non-standard DOS environment. The tight coupling of programming environment to data representation has limited the freedom and productivity of music software developers. The promise of XML is that, with the widespread availability of XML tools, MusicXML programmers can choose from a much wider range of development tools. We were delighted to see the promise become a reality during the MusicXML alpha test, where programmers developed MusicXML programs in many different environments. The ability to use rapid application development tools like Visual Basic to work with MusicXML makes it possible to build analysis programs using much more common development skills than the Unix expertise required for Humdrum. Good [Good 2001] illustrates this with some sample visual analysis programs that were written in half a day with Visual Basic, ActiveX controls, and MusicXML.
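As a small illustration of that flexibility, the following sketch uses nothing beyond Python's standard XML library to count the notes in each part of an uncompressed MusicXML (score-partwise) file. The file name is a placeholder, and a real tool would also handle compressed .mxl files and distinctions such as rests and chords.

import xml.etree.ElementTree as ET
from collections import Counter

def notes_per_part(path: str) -> Counter:
    """Count <note> elements in each part of an uncompressed MusicXML file."""
    root = ET.parse(path).getroot()  # <score-partwise> root in the partwise format
    # Map part IDs to readable names from the <part-list>.
    names = {
        sp.get("id"): sp.findtext("part-name", default=sp.get("id"))
        for sp in root.iter("score-part")
    }
    counts = Counter()
    for part in root.iter("part"):
        label = names.get(part.get("id"), part.get("id"))
        counts[label] += sum(len(m.findall("note")) for m in part.iter("measure"))
    return counts

if __name__ == "__main__":
    for name, n in notes_per_part("example.musicxml").items():  # placeholder path
        print(f"{name}: {n} notes")

The point is not the specific analysis but the tooling: any general-purpose XML parser can read the format, so the analysis logic no longer dictates the development environment.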
Uptal D. Patel, MD; Andrew Narva, MD, FACP, FASN; Eileen Newman, MS, RD; Patrick Archdeacon, MD; Theresa Cullen, MD, MS; Paul Drawz, MD; Kensaku Kawamoto, MD; Celeste Lee; Kimberly Smith, MD, MS Welcome, Uptal D. Patel, MD; HIT Working Group Chair Dr. Patel welcomed members and thanked them for their attendance and their feedback on the draft white paper, “Incorporating CKD-related data in electronic health records to improve patient care, public health surveillance, and research for patients with CKD: Recommendations from the NKDEP Health Information Technology Working Group”. White Paper Discussion, Paul Drawz, MD Dr. Drawz, lead author and Working Group member, summarized the overarching goals of the paper: - To discuss the type and structure of data needed within electronic health records (EHRs) in order to improve chronic kidney disease (CKD) care, surveillance, and research; - To establish CKD as a model that proves the feasibility of EHRs for improving the care of patients with other chronic conditions; and - To encourage primary care professionals to support implementation of accessible, interoperable CKD data into EHRs. The paper focuses primarily on early CKD but also addresses some of the particularly challenging aspects of advanced CKD and the transition to end-stage renal disease (ESRD). The paper is not intended for an information technology (IT) audience and, therefore, includes only a brief summary of technological issues. Additionally, the paper focuses on data and features (e.g., flowcharts, searchable data points) that are not currently available in most EHRs rather than data (e.g., estimated glomerular filtration rate) that are available. Feedback The Working Group thanked Dr. Drawz for a very strong draft of the paper and provided the following suggestions to improve the draft: - Clarify which items/data are already available in a majority of EHRs versus those that are needed; - Explain that trend data are particularly important for CKD; - Target messages to the entire health care team—including a focus on the transition from primary to nephrology care—to support continuity of care and reflect the collaborative care shown in the Chronic Care Model; - Ensure content speaks to hospital administrators in addition to health care providers, as administrators will be important stakeholders in EHR decisions; - Clarify terminology around “forward-facing” data for patients/providers and “back-end” data for research; - Consider addressing the benefits of health IT to patients earlier in the paper; - Ensure that incorporation of CKD data into EHRs is clearly explained as a model for EHR-based management of numerous chronic diseases; - Explicitly acknowledge target audiences, particularly when a certain action requires efforts from a specific group; and - Include more explicit, bulleted “benchmark” statements to clarify proposed actions. The Working Group needs to select a journal to target for publication. The paper touches on content relevant for primary care, nephrology, and health IT/informatics journals; however, a journal that is widely distributed among primary care providers would be best since primary care providers are the main target audience. The Working Group agreed to submit the paper to a credible, primary care provider-focused journal. The Journal of General Internal Medicine, the Annals of Internal Medicine, and Archives of Internal Medicine are possibilities. Before publication, government organizations of the public officials listed as authors may need to “clear” the paper.
This will include review by the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), which is a relatively brief (1 or 2 day) process. If the National Institutes of Health reviews the paper, it will not need to go through Centers for Disease Control and Prevention (CDC) review channels. Tables and Figures The draft paper includes three tables/figures for the Working Group’s consideration: - Table 1: CKD related data and their clinical use - Figure 1: How HIT Can Improve CKD Care – Interactions with the Chronic Care Model - Figure 2: Example of a CKD Population Management System Embedded Within an EHR The Working Group agreed Table 1 and Figure 1 should be included. Some Working Group members expressed concern that Figure 2 might be too “in the weeds” and could distract from the paper’s main message. However, others noted Figure 2 may be necessary to help readers understand how an EHR could be designed to allow for identification of patients with CKD and specific comorbidities and complications. The Working Group ultimately decided to include Figure 2 as an online supplement, with a link to Figure 2 in the paper. There is a gap in the literature on the continuum of care as related to CKD and health IT. Since it may not be feasible to add a focus on the continuum of care in this paper, it may be a beneficial topic for the Working Group to address in the future. Additionally, addressing the continuum of care as a separate effort could better emphasize a patient focus. - Dr. Drawz to revise the paper based on feedback and recirculate to the Working Group for final review. - Dr. Narva will submit a final draft for NIDDK clearance. The Working Group discussed the draft measures for inclusion in EHRs: - BP Control in CKD - Angiotensin Antagonist in Proteinuric CKD Each measure includes an outline of the numerator and denominator as starting point. Dr. Patel will resend the draft measures and numerator/denominator outline to the group for reference. The measures still seem appropriate in light of the recently released American College of Physicians guidelines. Similar measures currently approved by the National Quality Forum (NQF) rely on administrative data rather than clinical data. In order to incorporate measures based on clinical data into EHRs, the measures must be tested in the field. Testing measures is a long, expensive process that will require support. Dr. Narva has identified contacts at the Centers for Medicare and Medicaid Services (CMS) and will contact them to see if CMS would be willing to support testing. CMS currently has a contract for testing ESRD measures but not CKD measures. However, CMS may be able to add CKD measures to either the ESRD contract or another relevant contract (e.g., ambulatory care, EHR data). The CDC also may be willing to support measure testing as related to surveillance. In addition to testing support, a testing location will need to be identified. A large, local healthcare system (e.g., Kaiser of Southern California) could be an option for testing. Dr. Cullen noted that the Veterans Administration (VA) may be able to provide some testing and data on the draft measures. Currently, the VA is reviewing LOINC codes and has lab data in its system. During the March Working Group meeting, Health eDecisions was raised as a potential avenue for testing measures via the clinical decision support (CDS) template. 
Health eDecisions focuses on enabling the translation of CDS interventions into implementable components in order to increase the speed and ease of adoption by the provider community. Dr. Kawamoto, initiative coordinator for Health eDecisions, reported that Health eDecisions use case one is not yet in draft regulation and that it may be best to pursue other testing avenues. However, it may be possible to use case two in the future, as a second step after initial testing. - Dr. Narva to pursue a testing location and contact CMS regarding testing support. - Dr. Patel to send draft measures and numerator/denominator outlines to the Working Group. Getting Data to Patients, Group Discussion Dr. Patel recommended that the next major Working Group focus should be getting clinical data to patients. The Working Group should consider working with large EHR vendors to provide patients with easier access to the information that is most helpful/useful to them. The National Library of Medicine’s (NLM) MedlinePlus Connect, a free service that allows health organizations and health IT providers to link patient portals and EHR systems to MedlinePlus, may be an avenue to pursue. Currently, NLM is working on including patient education materials from all National Institutes of Health programs in MedlinePlus Connect. It may be helpful for members of the Working Group to meet with NLM to learn more about the process.
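For readers who want a sense of what a clinical-data-based measure such as "BP Control in CKD" could look like once CKD data are structured within the EHR, here is a hedged sketch. The record fields, the eGFR < 60 criterion, and the 140/90 cutoff are illustrative assumptions, not the Working Group's final specification, which would be settled during field testing.

from dataclasses import dataclass
from typing import List

@dataclass
class PatientRecord:
    # Simplified stand-in for structured EHR data; field names are illustrative.
    patient_id: str
    egfr: float        # most recent estimated GFR, mL/min/1.73 m2
    systolic: int      # most recent systolic BP, mmHg
    diastolic: int     # most recent diastolic BP, mmHg
    on_dialysis: bool = False

def bp_control_in_ckd(patients: List[PatientRecord]) -> float:
    """Proportion of CKD patients (denominator) whose latest BP is controlled (numerator).
    Cutoffs (eGFR < 60, BP < 140/90) are assumptions for illustration only."""
    denominator = [p for p in patients if p.egfr < 60 and not p.on_dialysis]
    numerator = [p for p in denominator if p.systolic < 140 and p.diastolic < 90]
    return len(numerator) / len(denominator) if denominator else float("nan")

cohort = [
    PatientRecord("a", egfr=45, systolic=132, diastolic=78),
    PatientRecord("b", egfr=38, systolic=151, diastolic=92),
    PatientRecord("c", egfr=72, systolic=128, diastolic=80),  # not CKD under this cutoff
]
print(f"BP control in CKD: {bp_control_in_ckd(cohort):.0%}")  # 50%

The contrast with existing NQF measures is that every element here comes from clinical data (lab values and vital signs) rather than administrative claims, which is precisely why field testing in a real EHR is needed.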
I have a complicated relationship with Nineteen Eighty-Four. To this day, it remains the only book that has ever bored so deeply into my head that I could not bring myself to finish it. This, after multiple attempts, spread across nearly 20 years of a life lived happily in the stacks of libraries and bookstores. I think about George Orwell’s novel more days than not. Sometimes I think that Nineteen Eighty-Four is the book that truly made me fall in love with language. Newspeak, the propagandic language created by the Party to limit expression and thought, permeates my own thoughts, which mentally—and hyperbolically—declare inconvenient situations as “doubleplusungood.” And yet, my life and livelihood are, for the most part, far removed from the anxiety on which the fiction of Orwell and other postwar writers honed in. The end of World War II left Western writers fearing the loss of their freedoms of speech and the press. Those fears manifested in their dystopian science fiction as verbal censorship imposed on the populace by a menacing government. Nineteen Eighty-Four is the most prominent example of this, by far, but the strict, legal regulation of language pops up in various science fiction novels and stories that follow Orwell’s. Inhabitants of Zilpha Keatley Snyder’s Green-sky have no means of expressing the negative emotions they feel, and are treated as social pariahs for being “unjoyful.” Ascians in Gene Wolfe’s The Book of the New Sun do not understand any sentence constructions that do not appear in their government-issued manuals on “Correct Thought.” Lois Lowry’s The Giver portrays a society whose emotional range has been stunted by its insistence on “precise speech.” First published in Sweden in 2012, Karin Tidbeck’s Amatka offers up a new, much more material take on language restriction—a world in which every object, from a chair to a pot of face cream, must be verbally told what it is and visibly labeled as such. In this world, a single, malleable, farmable substance—very much like the eponymous Stuff of Eighties horror fame—makes up every inanimate commodity. This substance poses an immediate threat to humanity if it is allowed to move beyond the linguistic restrictions that its manufacturers and consumers have placed upon it. Like Nineteen Eighty-Four and The Giver, Amatka has Soviet flair, both in the names given to its citizens and colonies, and in the requisite censorship of historical information, which extends even to the unmaking of people. However, this censorship serves largely to cover up the novel’s central mystery—what the “mushrooms” that make up Tidbeck’s created world really are. Early in the novel, protagonist Vanja compels her suitcase to maintain its shape by telling it what it is as she walks to her new apartment. Once settled, she realizes that her toothbrush has become unmade in her toiletry bag, leaving “[t]he bottom of the bag … coated in a thick paste.” In both cases, the labels “marking” Vanja’s belongings as specific items have been partially worn away, leading to the dissolution of the object into shapeless matter. Marking is the means by which the residents of Tidbeck’s created world control the gloop, farmed in Amatka, which they refine into varying shapes and functions. Children are taught to do this from an early age, through a memorized rhyme. Letting things disintegrate into their dangerous, unformed state is the height of childish irresponsibility. 
Between the “Marking Song” and the emphasis on scrapping items before they become unmade, no one in the novel’s world knows what their belongings are made of, or what will happen if they interact with them directly, without the buffer of the objects’ stamped and rigid identities. Tidbeck reinforces this separation when Vanja’s suitcase dissolves, and the reader learns that she “didn’t know what would happen if she touched” the gloop. In the earliest portions of the novel, every dissolved item warrants instant action. The dissolution of Vanja’s toothbrush is treated as little more than a mistake—careless, but nothing to be especially concerned about. When her troublesome suitcase reverts back to “whitish gloop,” however, the situation grows dire. Her lover, Nina, must call in a specialized cleaner to prevent the suitcase gloop from spreading to other items in Vanja’s room. Although the substance has “barely spread at all,” the cleaning leaves the floor deeply scarred, and results in the loss of the heroine’s bed and one of her boots. Vanja discovers that the gloop has sentience through her investigation into the disappearance of a local woman, which leads her to a set of mysterious pipes coming from underneath the outskirts of the colony. After hearing voices from the pipes, she goes to find their source—former citizens of Amatka, transformed into gloopy figures, but still conscious and capable of independent thought. After Vanja’s brief encounter with Amatka’s underground denizens, unmaking becomes desirable, even necessary. She endeavors to “[s]et the words free,” as one figure requests, and succeeds, but at the cost of her voice, which is taken from her by force. She has committed a revolutionary act, and one which leads each of Amatka’s residents to undergo a complete transformation as they integrate bodily with the gloop—a conversion she cannot make, because she can no longer declare who and what she is. Where government restricts thought in Nineteen Eighty-Four, the marking convention in Amatka prevents being. The gloop is neither a suitcase, nor a toothbrush, but it is not not those things, either. It could be, certainly, if it chose to be so, but choice has been stripped away from the sentient gloop. It has been weighed, measured, and classified. The moment it dares to become something other than what its label dictates, it is sent for the bin. The idea of a post-label society may be strange to those of us used to the way that labels like pansexual, nonbinary, and Afro-Latinx allow individuals to express their identities in more fully formed ways. Amatka conceives of a world in which everyone can simply be—and be accepted—outside the confines of particular terms. The gloop is capable of becoming anything, a point Vanja proves when she accidentally unmakes a pencil and reforms it into an approximation of a spoon, just before meeting the gloop-figures. The mysterious substance does not wish to be these items, however, and instead desires freedom from humanity’s labels—a freedom it will extend to its oppressors as well. “You’ll be everything,” one gloop-figure tells Vanja of the coming transformation. “You’ll all be everything.” Amatka ventures beyond traditional tropes of language and censorship to imagine a near-future, post-label society in which queer and multiracial people—and anyone else whose identity falls between the boxes—can live life unrestricted. 
Nina’s relationship with her children proves to be a critical example of this, as she—a queer woman—struggles to raise her family according to Amatka’s standards. To prevent children from becoming “dependent and less inclined to feel solidarity with the commune,” the colony restricts Nina and her co-parent Ivar’s access to their children to weekly visits. It’s difficult to read these sparse scenes in Amatka and not think of the discrimination that queer and polyamorous partners face when trying to raise a family, and even more so when the children are finally shipped away to the city for supposed safety reasons. Nina’s declaration at the end of the novel—“I’m fetching my children.”—only strengthens this parallel. The freedom offered by her fusion with the gloop gives one of Amatka’s central, queer characters the power to claim direction and control over her own family unit, to make it into what it can be, not what an outsider designates it to be. Tidbeck’s novel does not imagine a society in which language is dangerous or verboten, but one in which it is used for liberation instead of limitation. Finding new, more expressive words in Nineteen Eighty-Four and The Giver results in individual deliverance, but this is not enough for the subjugated gloop of Amatka. Where other authors offer a rough analog of our own world as a remedy to, or a remedied version of, Oceania and The Community, Tidbeck envisions a radical shift, past our present and often problematic use of language, and into a post-label society. Like Nineteen Eighty-Four, Amatka opens on a world afraid of that which it has never tried to understand. As it follows its queer heroine, Tidbeck’s novel, like Orwell’s, moves through a society so trapped by its language that it eradicates anything which dares to be something other than what someone else has declared it to be. As the novel closes, the people of Amatka who have become one with the gloop begin a march on the capital, intent on liberating all of its residents, human and gloop alike. It’s a rare and beautiful message from a Soviet-esque dystopia, and one that carries hope—not found in Winston Smith’s final, adoring love for Big Brother—for anyone who finds themself existing, or yearning to exist, beyond the margins.
Staff Sgt. Ray C. Hunt was a mechanic in the Army Air Corps when the Japanese surprise attack across the Pacific on Dec. 7, 1941, dragged him into World War II. He was soon captured, escaped the Bataan Death March that killed thousands, and then led guerrilla forces against the Japanese for the rest of the war. Hunt is one of history’s true reluctant heroes. He joined the Army Air Corps in 1939 partially to avoid duty in the infantry if the war in Europe eventually swept up the United States. He rose to the rank of staff sergeant as a mechanic and was an expert in the Curtiss P-40 Warhawk fighter. When the Japanese attacked at Pearl Harbor on Dec. 7, Hunt’s base in the Philippines was hit just a few hours later. Because Hunt was west of the International Date Line, his base experienced the attack in the early hours of December 8. The mechanic and other members of his unit were in the field sleeping in foxholes when the attack began, but still suffered losses as bombs and rounds from aircraft pelted their positions. For troops in the Philippines, that wasn’t the end of the attack. The Imperial Japanese followed up air attacks with amphibious landings and invasion. America defaulted to its old War Plan Orange in the Philippines which called for a fierce defense of Bataan Peninsula. Hunt and others created a hidden airfield in the jungle and recovered their planes which flew missions against Japan. But the defense was doomed from the start by a lack of true combat troops and the decision not to reinforce the defenders. The Japanese launched a Death March to move captured Americans to prison camps, and many U.S. service members died in the forced march so brutal that its organizer was executed for war crimes. Luckily, Hunt and a few others were able to escape the march alive. In the jungle, Hunt recruited a small group of fighters and began operating under Lt. Robert Lapham, another American turned Filipino guerrilla leader. As the war ground on, the resistance in the Philippines spent most of its time gathering intelligence and moving constantly, though they did launch harassing attacks when possible. Hunt was promoted to captain by the guerrillas and given command of a large group of fighters which eventually grew to 3,400. Their finest hour came in the five days before the American invasion of Luzon when they launched a massive campaign to prepare the island for American landings in what was called “Operations Plan 12.” The guerrillas received their orders on Jan. 4, 1945. The American relayed the order to begin operations to the other company commanders and then took his men on an assault against a Japanese encampment. While the overzealous guerrillas launched such a slapdash attack that Hunt called it a “Marx Brothers battle,” it managed to cause extensive damage and kill some of the Japanese defenders. The best part for the guerrillas was that when the Japanese troops found their bullet casings dated 1942 and 1943, they assumed that they had been attacked by paratroopers and so began to focus on the possibility of constant attacks, degrading their morale and readiness. Hunt and his men spent the following days collecting intelligence and harassing the Japanese as they withdrew to defensive positions. When the invasion came on Jan. 9, they were ordered to stay in their position. This put them in the perfect place to attack Japanese forces falling back east from the main American attackers in the west. 
Hunt’s men would later be credited with 3,000 kills in those crucial five days preceding the invasion. The American leadership accepted Hunt’s promotion to captain and ordered him to rejoin American forces. He went on horseback with 15 of his fighters to the headquarters and briefed other officers on the disposition of guerrilla and Japanese forces, often accidentally speaking in Filipino dialects because he was no longer used to speaking English. Hunt voluntarily remained in the Philippines for a few more months to support the American invasion and was personally pinned with a Distinguished Service Cross by Gen. Douglas MacArthur who thanked Hunt and others for remaining in the Philippines and serving American interests for three years.
The first case of a new and potentially more infectious strain of Covid-19 has been confirmed in the United States, Colorado health officials said Tuesday. The health officials confirmed the case and notified the Centers for Disease Control and Prevention. The infected individual, a man in his 20s, does not have a history of traveling and is in isolation in Elbert County, about an hour and a half south of Denver, officials said. “There is a lot we don’t know about this new Covid-19 variant, but scientists in the United Kingdom are warning the world that it is significantly more contagious,” Colorado Gov. Jared Polis said. “The health and safety of Coloradans is our top priority and we will closely monitor this case, as well as all COVID-19 indicators, very closely.” “We are working to prevent spread and contain the virus at all levels,” Polis said, adding that public health officials were working to identify other potential cases through contact tracing interviews. Preliminary analysis of the mutated strain, first identified in the U.K., suggests it may be the culprit behind Britain’s recent spike in cases. The new strain, referred to as SARS-CoV-2 VUI 202012/01, could be as much as 70% more transmissible, British Prime Minister Boris Johnson said. The CDC said in December that the new strain could already be circulating in the U.S. without notice. The CDC cited ongoing travel between the U.K. and the U.S. as an explanation for the potential arrival of the new variant. The discovery of the strain in Britain sparked border closures in European countries like Ireland, France, Belgium and Germany as well as countries outside the continent. The Trump administration does not plan to impose Covid-19 screenings for passengers arriving at U.S. airports from the United Kingdom, Reuters reported, citing officials familiar with the matter. The British government confirms that another infectious variant of the coronavirus identified in South Africa has also emerged in the United Kingdom. The strain from South Africa has not yet been identified in the United States. President Donald Trump’s coronavirus vaccine czar, Moncef Slaoui, said earlier in December that the Pfizer and Moderna Covid-19 shots should be effective against new strains.
Apophatic theology (from Ancient Greek: ἀπόφασις via ἀπόφημι apophēmi, meaning "to deny"), also known as negative theology, via negativa or via negationis (Latin for "negative way" or "by way of denial"), is a type of theological thinking that attempts to describe God, the Divine Good, by negation, to speak only in terms of what may not be said about the perfect goodness that is God. It stands in contrast to cataphatic theology. An example occurs in the assertion of the 9th-century theologian John Scotus Erigena: "We do not know what God is. God Himself does not know what He is because He is not anything [i.e. "not any created thing"]. Literally God is not, because He transcends being." When he says "He is not anything" and "God is not", Scotus does not mean that there is no God, but that God cannot be said to exist in the way that creation exists, i.e. that God is uncreated. He is using apophatic language to emphasise that God is "other". In brief, negative theology is an attempt to clarify religious experience and language about the Divine through discernment, gaining knowledge of what God is not (apophasis), rather than by describing what God is. The apophatic tradition is often, though not always, allied with the approach of mysticism, which focuses on a spontaneous or cultivated individual experience of the divine reality beyond the realm of ordinary perception, an experience often unmediated by the structures of traditional organized religion or by the conditioned role-playing and learned defensive behavior of the outer man. Apophatic description of God In negative theology, it is accepted that experience of the Divine is ineffable, an experience of the holy that can only be recognized or remembered abstractly. That is, human beings cannot describe in words the essence of the perfect good that is unique to the individual, nor can they define the Divine, in its immense complexity, related to the entire field of reality. As a result, all descriptions if attempted will be ultimately false and conceptualization should be avoided. In effect, divine experience eludes definition by definition: - Neither existence nor nonexistence as we understand it in the physical realm, applies to God; i.e., the Divine is abstract to the individual, beyond existing or not existing, and beyond conceptualization regarding the whole (one cannot say that God exists in the usual sense of the term; nor can we say that God is nonexistent). - God is divinely simple (one should not claim that God is one, or three, or any type of being.) - God is not ignorant (one should not say that God is wise since that word arrogantly implies we know what "wisdom" means on a divine scale, whereas we only know what wisdom is believed to mean in a confined cultural context). - Likewise, God is not evil (to say that God can be described by the word 'good' limits God to what good behavior means to human beings individually and en masse). - God is not a creation (but beyond that we cannot define how God exists or operates in relation to the whole of humanity). - God is not conceptually defined in terms of space and location. - God is not conceptually confined to assumptions based on time. Even though the via negativa essentially rejects theological understanding in and of itself as a path to God, some have sought to make it into an intellectual exercise, by describing God only in terms of what God is not. 
One problem noted with this approach is that there seems to be no fixed basis on deciding what God is not, unless the Divine is understood as an abstract experience of full aliveness unique to each individual consciousness, and universally, the perfect goodness applicable to the whole field of reality. It should be noted however that since religious experience—or consciousness of the holy or sacred, is not reducible to other kinds of human experience, an abstract understanding of religious experience cannot be used as evidence or proof that religious discourse or praxis can have no meaning or value. In apophatic theology, the negation of theisms in the via negativa also requires the negation of their correlative atheisms if the dialectical method it employs is to maintain integrity. The fourth-century Cappadocian Fathers stated a belief in the existence of God, but an existence unlike that of everything else: everything else that exists was created, but the Creator transcends this existence. The essence of God is completely unknowable; mankind can know God only through His energies. Apophatic theology found its most influential expression in works such as those of Pseudo-Dionysius the Areopagite and Maximus the Confessor; in his Summa Theologica, Thomas Aquinas quotes Pseudo-Dionysius 1,760 times. Augustine of Hippo defined God aliud, aliud valde, meaning "other, completely other", in Confessions 7.10.16. In contrast, making positive statements about the nature of God, which occurs in most Western forms of Christian theology, is sometimes called cataphatic theology. Eastern Christianity makes use of both apophatic and cataphatic theology. Adherents of the apophatic tradition in Christianity hold that, outside of directly-revealed knowledge through Scripture and Sacred Tradition (such as the Trinitarian nature of God), God in His essence is beyond the limits of what human beings (or even angels) can understand; He is transcendent in essence (ousia). Further knowledge must be sought in a direct experience of God or His indestructible energies through theoria (vision of God). In Eastern Christianity, God is immanent in his hypostasis or existences. Negative theology played an important role early in the history of Christianity, for example, in the works of Clement of Alexandria. Three more theologians who emphasized the importance of negative theology to an orthodox understanding of God were Gregory of Nyssa, John Chrysostom, and Basil the Great. John of Damascus employed it when he wrote that positive statements about God reveal "not the nature, but the things around the nature." It continues to be prominent in Eastern Christianity (see Gregory Palamas). Apophatic statements are crucial to many modern theologians in Orthodox Christianity (see Vladimir Lossky, John Meyendorff, John S. Romanides and Georges Florovsky). In Orthodox theology, apophatic theology is taught as superior to cataphatic theology. While Aquinas felt positive and negative theology should be seen as dialectical correctives to each other like thesis and antithesis which produces a synthesis, Lossky argues, based on his reading of Dionysius and Maximus Confessor, that positive theology is always inferior to negative theology which is a step along the way to the superior knowledge attained by negation. This is expressed in the idea that mysticism is the expression of dogmatic theology par excellence. 
Negative theology has a place in the Western Christian tradition as well, although it is definitely much more of a counter-current to the prevailing positive or cataphatic traditions central to Western Christianity. For example, theologians like Meister Eckhart and St. John of the Cross (San Juan de la Cruz), mentioned above, exemplify some aspects of or tendencies towards the apophatic tradition in the West. The medieval work, The Cloud of Unknowing and St. John's Dark Night of the Soul are particularly well known in the West. C. S. Lewis, in his book Miracles, advocates the use of negative theology when first thinking about God, in order to cleanse our minds of misconceptions. He goes on to say we must then refill our minds with the truth about God, untainted by mythology, bad analogies or false mind-pictures. Ivan Illich, the historian and social critic, can be read as an apophatic theologian, according to a longtime collaborator, Lee Hoinacki, in a paper presented in memory of Illich, called "Why Philia?" While negative theology is used in Christianity as a means of dispelling misconceptions about God, and of approaching Him beyond the limits of human reasoning, most common doctrine of western Christianity is taken to involve positive claims: that God exists and has certain positive attributes, even if those attributes are only partially comprehensible to us. In Greek philosophy The ancient Greek poet Hesiod has in his account of the birth of the gods and creation of the world (i.e., in his Theogony) that Chaos begot the primordial deities: Eros, Gaia (Earth) and Tartarus, who begot Erebus (Darkness) and Nyx (Night), and Plato echoes this genealogy in the Timaeus 40e, 41e where the familiar Titan and Olympian gods are sired by Heaven and Earth. Nevertheless, Plato is far from advocating a negative theology. His Form of the Good (identified by various commentators with the Form of Unity) is not unknowable, but rather the highest object of knowledge (The Republic 508d–e, 511b, 516b). Plotinus was the first to propose negative theology. He advocated it in his strand of neoplatonism (although he may have had precursors in neopythagoreanism and middle Platonism). In his writings he identifies the Good of the Republic (as the cause of the other Forms) with the One of the first hypothesis of the second part of the Parmenides (137c–142a), there concluded to be neither the object of knowledge, opinion or perception. In the Enneads Plotinus writes: "Our thought cannot grasp the One as long as any other image remains active in the soul…To this end, you must set free your soul from all outward things and turn wholly within yourself, with no more leaning to what lies outside, and lay your mind bare of ideal forms, as before of the objects of sense, and forget even yourself, and so come within sight of that One." Apophatic movements in Hinduism are visible in the works of Shankara, a philosopher of Advaita Vedanta school of Indian philosophy, and Bhartṛhari, a grammarian. While Shankara holds that the transcendent noumenon, Brahman, is realized by the means of negation of every phenomenon including language; Bhartṛhari theorizes that language has both phenomenal and noumenal dimensions, the latter of which manifests Brahman. The standard texts of Vedanta philosophy, to which Shankara also belonged, were the Upanishads and the Brahma Sutras. An expression of negative theology is found in the Brihadaranyaka Upanishad, where Brahman is described as "neti-neti" or "neither this, nor that". 
Further use of apophatic theology is found in the Brahma Sutras. In Advaita, Brahman is defined as being Nirguna or without qualities. Anything imaginable or conceivable is not deemed to be the ultimate reality. The Taittiriya hymn speaks of Brahman as "one where the mind does not reach". Yet the Hindu scriptures often speak of Brahman's positive aspect. For instance, Brahman is often equated with bliss. These contradictory descriptions of Brahman are used to show that the attributes of Brahman are similar to ones experienced by mortals, but not the same. Negative theology also figures in Buddhist and Hindu polemics. The arguments go something like this – Is Brahman an object of experience? If so, how do you convey this experience to others who have not had a similar experience? The only way possible is to relate this unique experience to common experiences while explicitly negating their sameness. In other Eastern traditions Many other East Asian traditions present something very similar to the apophatic approach: for example, the Tao Te Ching, the source book of the Chinese Taoist tradition, asserts in its first statement: the Tao ("way" or "truth") that can be described is not the constant/true Tao. The Arabic term for "negative theology" is lahoot salbi, which is a "system of theology" or nizaam al lahoot in Arabic. Different traditions and doctrinal schools in Islam, called Kalam schools (see Islamic schools and branches), use different theological approaches, or nizaam al lahoot, in approaching God in Islam (Allah, Arabic الله) or the ultimate reality. The lahoot salbi or "negative theology" involves the use of ta'til, which means "negation," and the followers of the Mu'tazili school of Kalam, founded by Imam Wasil ibn Ata, are often called the Mu'attili, because they are frequent users of the ta'til methodology. Shia Islam is another sect that adopted "negative theology". Most adherents of the Mušabbiha sect reject this methodology and adhere to its opposite. They believe that the Attributes of God, such as "Hand" and "Foot", should be taken literally and hence believe that God is like a human being. Most Sunnis, who are Salafi, Ash'ari, and Maturidi, however, adhere to a middle path between negation and anthropomorphism. In Jewish belief, God is defined as the Creator of the universe: "In the beginning God created the heaven and the earth" (Genesis 1:1); similarly, "I am God, I make all things" (Isaiah 44:24). God, as Creator, is by definition separate from the physical universe and thus exists outside of space and time. God is therefore absolutely different from anything else, and, as above, is in consequence held to be totally unknowable. It is for this reason that we cannot make any direct statements about God. (See Tzimtzum (צמצום): the notion that God "contracted" his infinite and indescribable essence in order to allow for a "conceptual space" in which a finite, independent world could exist.) Bahya ibn Paquda shows that our inability to describe God is similarly related to the fact of His absolute unity. God, as the entity which is "truly One" (האחד האמת), must be free of properties and is thus unlike anything else and indescribable; see Divine simplicity. This idea is developed fully in later Jewish philosophy, especially in the thought of the medieval rationalists such as Maimonides and Samuel ibn Tibbon. It is understood that although we cannot describe God directly (מצד עצמו) it is possible to describe Him indirectly via His attributes (תארים).
The “negative attributes” (תארים שוללים) relate to God Himself, and specify what He is not. The “attributes of action” (תארים מצד פעולותיו), on the other hand, do not describe God directly, rather His interaction with creation . Maimonides was perhaps the first Jewish Thinker to explicitly articulate this doctrine (see also Tanya Shaar Hayichud Vehaemunah Ch. 8): God's existence is absolute and it includes no composition and we comprehend only the fact that He exists, not His essence. Consequently it is a false assumption to hold that He has any positive attribute... still less has He accidents (מקרה), which could be described by an attribute. Hence it is clear that He has no positive attribute however , the negative attributes are necessary to direct the mind to the truths which we must believe... When we say of this being, that it exists, we mean that its non-existence is impossible; it is living — it is not dead; ...it is the first — its existence is not due to any cause; it has power, wisdom, and will — it is not feeble or ignorant; He is One — there are not more Gods than one… Every attribute predicated of God denotes either the quality of an action, or, when the attribute is intended to convey some idea of the Divine Being itself — and not of His actions — the negation of the opposite. (The Guide for the Perplexed, 1:58) In line with this formulation, attributes commonly used in describing God in rabbinic literature, in fact refer to the "negative attributes" — omniscience, for example, refers to non-ignorance; omnipotence to non-impotence; unity to non-plurality, eternity to non-temporality. Examples of the “attributes of action” are God as creator, revealer, redeemer, mighty and merciful . Similarly, God’s perfection is generally considered an attribute of action. Joseph Albo (Ikkarim 2:24) points out that there are a number of attributes that fall under both categories simultaneously. Note that the various Names of God in Judaism, generally, correspond to the “attributes of action” — in that they represent God as he is known. The exceptions are the Tetragrammaton (Y-H-W-H) and the closely related "I Am the One I Am" (אהיה אשר אהיה — Exodus 3:13–14), both of which refer to God in his "negative attributes", as absolutely independent and uncreated; see "Names of God in Judaism". Since two approaches are used to speak of God, there are times when these may conflict, giving rise to paradoxes in Jewish philosophy. In these cases, two descriptions of the same phenomenon appear contradictory, whereas, in fact, the difference is merely one of perspective: one description takes the viewpoint of the "attributes of action" and the other, of the "negative attributes". See the paradoxes described under free will, Divine simplicity and Tzimtzum. - Lossky (1997), The Vision of God, Crestwood, N.Y.: SVS Press, pp. 36–40, ISBN 0-913836-19-2 - Papanikolaou, Aristotle (2006), Being With God: Trinity, Apophaticism, and Divine–Human Communion (1st Edition), Notre Dame, Indiana:University of Notre Dame Press, p. 2, ISBN 978-0-268-03830-4 - Lossky, The Mystical Theology of the Eastern Church p. 9 - Smith, Gregory B. (2008). Between Eternities: On the Tradition of Political Philosophy, Past, Present, and Future. Latham MD: Lexington Books. p. 199. ISBN 0739120778. Retrieved 16 August 2015. - LA Times: Jack Miles. Faith and Belief: 'The Evolution of God' by Robert Wright and 'The Case for God' by Karen Armstrong - Hoinacki, Lee - McGrath, Alister E. (2011). Christian Theology: An Introduction, 5th Edition. 
Note that, alternatively, the construct of God incorporating all of reality is also offered in some schools of Jewish mysticism. Notably, in the Tanya (the Chabad Lubavitch book of wisdom), it is stated that to consider anything outside of God is tantamount to idolatry. The paradox that this introduces is noted by Chabad thinkers (how can an entity be a creator of itself), but the resolution is considered outside of the potential realm of human understanding.
The book that mainly influenced my approach to therapy is "Man's Search For Meaning" by Viktor E. Frankl. His book outlines how we can find purpose and meaning in all circumstances. I believe the biggest barrier for people seeking mental health care is the negative connotation associated with "therapy". Working with a professional counselor is extremely beneficial to an overall healthy and meaningful quality of life. The misperception is that counseling is only for certain people with particular challenges. Seeking counseling services is a strong and proactive show of character. Treatment plan goals and objectives are developed through a collaborative conversation that continues throughout the therapeutic process. These goals and objectives are flexible, which reinforces the importance of adapting to change.
Depression often causes people to feel sad, empty, or hopeless, and can cause a lack of interest in life. It can also affect a person's thinking patterns and physical health. Anxiety can mean nervousness, worry, or self-doubt. Anxiety disorder is a mental health disorder that entails excessive, repeated bouts of worry, anxiety, and/or fear. Distress stems from a subjective perception of something being unwanted, undesirable, or detrimental to your wellbeing. Excessive stress significantly impairs mental and physical health and is associated with many diseases and conditions. Trauma is the result of experiencing an event perceived as extremely distressing. Although the stress threshold differs for each person, meaning that each person perceives and experiences trauma differently, a traumatic event is one that exceeds that threshold and overwhelms one's ability to cope or to process the experience emotionally. Symptoms may include shock, anxiety, confusion, hopelessness, feeling disconnected, mood swings, nightmares, and intrusive thoughts.
Career counseling is designed to help people choose, change, or leave a career at any stage of life. Careers are often wrapped up in people's perceived identity; therefore, any change can cause anxiety and/or depression. Workplace issues are a common source of stress and can include interpersonal conflict, communication problems, gossip, harassment, discrimination, low motivation and job satisfaction, performance issues, and poor job fit. Relationship counseling refers to issues with a partner or spouse. It can include issues related to relationship distress, relationship satisfaction, communication, intimacy, and more. Self-esteem is the degree to which a person feels confident, valuable, and worthy of respect. Low self-esteem can influence overall well-being and be linked to anxiety and/or depression. Social anxiety, or social phobia, is a fear of social situations or of interacting with people other than close friends and family. Social anxiety can be persistent, intense, and debilitating, greatly affecting daily life.
The US Federal Reserve raised interest rates on 16 March, announcing the end of the monetary easing policy that began in March 2020. Emerging economies are once again facing the risk of capital flight, currency devaluation, an increased burden of foreign debt denominated in US dollars, and even systemic crises in their balance of payments. Yang Shuai and Wen Tiejun analyse how the US dollar's hegemonic financial system works globally and its impact on China, and point out that China urgently needs to form an autonomous sovereign currency issuance mechanism based on "ecological resources".
- The dollar's position as the default international currency has enabled the US to dump dollars into global markets through the purchase of goods including raw materials, energy, and products; the dollars then flow back to the US and support its domestic capital markets as developing countries reinvest them in dollars and dollar-denominated assets such as US treasuries. The US relies on its military superiority and monetary power to fleece the global community (全球剪羊毛 quánqiú jiǎn yángmáo), especially the Global South, and to collect huge seigniorage, the government revenue created by issuing currency. At the same time, developing countries also bear the social and environmental costs of their dependency on the export of natural resources and low-labor-cost products.
- Because China's economy was dominated by exports in the past, it has gradually developed a currency issuance system anchored to foreign exchange reserves. The central bank has to issue money to absorb the over-supplied foreign exchange, and this increases the money supply. At its peak (2014), China's foreign exchange reserves accounted for 80 percent of the total assets of the central bank. This issuance system not only makes the central bank's monetary policy susceptible to foreign exchange flows, but also creates a huge surplus of liquidity that is absorbed by the real estate sector, leading to the financialization of the economy.
- Unlike Western economies, China has experience in relying on real assets to issue sovereign currency and to respond to crises. For example, during the War of Resistance Against Japanese Aggression (1937-1945), for every 10,000 yuan of currency issued in the Shandong base areas, at least 5,000 yuan was used to purchase and store important materials such as grain and cotton, and the currency was recycled and put into circulation through the sale and purchase of materials. The relative stability of currency issuance and circulation was successfully maintained.
- China's huge stock of spatial ecological resources, a system composed of mountains, rivers, forests, farmlands, and grasslands, can be used as a reliable 'anchor' for currency issuance, i.e., ecological assets can be converted into central bank assets by issuing bonds. This is designed to form a currency issuance mechanism based on the value of ecological resources owned by rural collective economic organizations, and to establish a three-tier market led by three transaction parties — farmers' collective economic organizations, county and township level operational companies, and outside investors — to promote the monetization of ecological resources.
The authors argue that the transformation of China's money supply system is urgent, and that the "materials standard" is an applicable experience from contemporary Chinese financial history for maintaining stable money issuance.
In the construction of a three-tier market for ecological resources and currency issuance, a constructive cycle of ecological development, ecological consumption, and ecological currency supply can be formed. This will enable the construction of an ecological civilization and the rural revitalization strategy to be promoted in a complementary manner. At the same time, it will develop and strengthen the new collective economy, consolidate the ownership foundation of the basic socialist economic system, and organically combine the issuance of sovereign currency with the monetization of sovereign resources, thus laying a solid foundation for dealing with external pressures.
Amid the stalemate between Russia and Ukraine, Chinese Foreign Minister Wang Yi began his visit to Pakistan, Afghanistan, India, and Nepal on 24 March. It was his first visit to India since its border conflict with China in 2020. China's frequent exchanges of visits with countries in the "intermediate zone" — Asia, Africa, and Latin America — have attracted the attention of observers. The article, by Teng Jianqun and Wei Honglang, analyses the historical legacy of Mao Zedong's idea of the "intermediate zone" and how China has sought to counter the global expansion of the US sphere of influence.
- In the 1960s, Mao developed his concept of the "intermediate zone", referring to the vast number of economically underdeveloped countries in Asia, Africa, and Latin America. In the 1970s, Mao proposed the "three worlds" theory. Unlike the Intermediate Region or Three-World models proposed by Western strategists, Mao's aim was to "unite the vast number of Third World countries and unite the Second World countries to form the broadest international united front against hegemony".
- Since the 1950s, Chinese diplomacy has sought to break through in the "intermediate zone" to form a united front against the superpowers.
- Since President Obama's "Asia-Pacific rebalancing" strategy, US administrations have strengthened their control over Japan and South Korea, directly leading to a deterioration in China's relations with these countries. With Biden's election, the focus has been on military relations with other NATO members.
- The US has also been active in bringing India, which maintains a classic non-aligned posture, into the fold, and in putting pressure on ASEAN. In Latin America, the US government has made statements discrediting China and Russia, while suppressing left-wing governments. In Africa, the US also uses false narratives such as the "debt trap theory", "resource plunder theory", and "neo-authoritarianism theory" to unite other Western countries against China.
- The Biden administration has expanded its sphere of influence by various means, returning to multilateral international mechanisms and depriving China of its voice in various international spheres. One example was pressuring the WHO to smear China by insisting that it investigate the source of the epidemic. Others include holding "democracy summits" and other events that bring together countries with so-called shared values, further strengthening control over US allies, frequent visits by officials to Southeast Asia, and provocation of Russian-Chinese relations.
In the face of the US policy of global hegemonic expansion, the authors suggest that China's foreign policy should still take Mao's international relations strategy as a reference and starting point, refine its methods of work to win over the "intermediate zone" countries, stabilize its relations with neighboring countries, further strengthen its "new era comprehensive strategic partnership" with Russia, and strengthen existing multilateral mechanisms and regional organizations.
The Russian-Ukrainian conflict has attracted global attention, with the embassies of Russia, Ukraine, France, the UK, and many other countries in China actively speaking out on Chinese social media platforms, making the arena of Chinese public opinion an intense online battleground in the Russian-Ukrainian conflict. It has also become, however, a tool for anti-China activists to distort the country's image. In order to help our readers understand the facts, this article exposes a Twitter account called "The Great Translation Movement" (大翻译运动 dà fānyì yùndòng) that has been used to smear China by cherry-picking extreme Chinese online rhetoric about the Russian-Ukrainian conflict.
- The Great Translation Movement was started by ChongLangTV, the largest Chinese-language community on Reddit, with 50,000 followers, which takes an anti-communist and anti-China stance and also publicly disseminates the personal information of people it dislikes on foreign websites.
- Other members of the Movement range from people supposedly seeking "asylum" in Taiwan to those in Australia claiming to expose the "truth about China". Their content is also often favoured by some Western anti-China media correspondents in China.
- The Movement deliberately selects specific words of individuals, even false ones, to inflame conflicts, which contributes to inciting hatred against Chinese people and other Asians.
- The Movement began by translating discussions and opinions on the Russian-Ukrainian war from the Chinese Internet into English and reposting them outside of China. The main goal is to spread the message that "the Chinese are a collection of proud, arrogant, populist, cruel, bloodthirsty, and unsympathetic people", as one of its members said in an interview.
- The Movement has begun to move away from the Russian-Ukrainian conflict and towards controversial aspects of China's relationships with Japan and South Korea.
- The Movement is, in essence, part of the "peaceful evolution" strategy, referring to the attempt to transform China's socialist system by allegedly peaceful means, primarily by the US.
The author points out that the polarization of some individuals' words and actions through social media is a common phenomenon in the online world. It is worth noting, however, that with the support of various forces outside China, this so-called Great Translation Movement may not be a short-term phenomenon, but rather a struggle for public opinion with a new twist. The Western promotion of "democracy and freedom" and other "values" to the Chinese people in the recent past has now become a campaign to spread misinformation and foment hatred towards China around the world.
Since the Covid-19 outbreak, the social governance approach with Chinese characteristics has received worldwide attention in the fight against the epidemic. The task remains arduous in rural areas, where people are generally less aware of the epidemic and health precautions.
Guo Miao and Hao Jing found that traditional loudspeakers (大喇叭 dà lǎ bā), first used during the Chinese revolutions, have played an important role in the prevention and control of the epidemic in rural areas and have served as an effective means of information dissemination for grassroots governance with Chinese characteristics.
- The loudspeaker, like the newspaper and television, represents the voice of a government that is present and credible. During the pandemic, it has disseminated information effectively, dispelled rumors in a timely fashion, and avoided the polarization of online opinions and behaviors in rural areas.
- Radio loudspeakers are used by village cadres to disseminate information to villagers, such as early warning messages and policy interpretations. This increases the breadth and depth of villagers' political participation, eases social or political conflicts, and stabilizes the political situation through orderly grassroots governance.
- Village cadres use diversified forms such as dialects, jingles, and chit-chat to disseminate the latest developments of the epidemic, which has helped audiences understand the content of the message and raised their awareness of epidemic precautions. For example, in Chengdu, Sichuan province, jingles were used to discourage mahjong playing and advocate home quarantine, while in Shuangyashan, Heilongjiang province, residents were asked to abide by epidemic rules in the form of traditional narrative singing, or clapper talk (快板 kuàibǎn).
- Social media platforms record, edit, and spread the content of loudspeakers, which is also an effective online presentation of China's rural governance. This has increased the public's trust in and recognition of the country's grassroots governance.
The authors point out that preventing the spread of Covid-19 in rural areas is a major priority during the pandemic outbreak and that it is extremely important to maintain two-way and transparent communication. The loudspeaker style of communication has not only enhanced the image of the grassroots government as pro-people, responsible, and effective, but has also met public needs for information, guided public opinion effectively, and prevented potential social risks.
The Communist Party of China's series of theories on mass work (群众工作 qúnzhòng gōngzuò), on close contact with the masses, and on wholehearted service to the people are key to its continuous success in leading the Chinese revolution and building socialism. They are also the result of the application of the basic principles of Marxism to Chinese reality by leaders represented by Mao Zedong. In their article, Ai Silin and Kang Peizhu review how the idea of the mass line (群众路线 qún zhòng lù xiàn) was formed and developed step by step from the early days of the founding of the Party (1921) to the period of socialist revolution and construction in China (1949-1979).
- During the Chinese Great Revolution (大革命 dà gé mìng, 1925-1927), the prevailing ideological trend, represented by key leaders within the Party, overlooked the peasants and focused only on the workers' movement. However, Mao Zedong, by examining the peasant movement in Hunan, concluded that the peasants were the most reliable allies of the Party and the revolution. In 1930, he wrote "Oppose Book Worship", which proposed that the correct revolutionary strategy of the Party could only be summarized from the practical experience of mass struggle.
- During the period of the Agrarian Revolution (1927-1937), it was necessary to unite the masses to build the revolutionary base in the Jinggang Mountains (井冈山 jǐng gāng shān) and to break the military "siege" of the Kuomintang. Mao pointed out that "all the problems faced by the masses" should be put on the Party's agenda. "We must make the masses realize that we represent their interests and breathe the same air as they do."
- During the War of Resistance against Japanese Aggression (1937-1945), Mao believed that victory would be achieved by fighting a protracted war (持久战 chí jiǔ zhàn), and proposed "the army and the people as the foundation of victory". Mao also developed the fundamental method of "coming from the masses and going to the masses" on the basis of Marxist epistemology.
- After the founding of the People's Republic of China, Mao's mass line was further developed. He unified democratic centralism with the mass line, advocated the "two participations, one reform and three-in-one unity" (两参一改三结合 liǎng cān yī gǎi sān jié hé) and the system of workers' congresses, and fiercely attacked detachment from the masses, the desire for pleasure, and bureaucratism among Party members and cadres after the CPC came to power.
The mass line is the lifeline and foundation of the CPC. As early as the revolutionary period, Mao proposed that "the masses are the real bronze wall that no force can break." In 1981, the mass line was expressed as "everything for the masses, everything relying on the masses, coming from the masses, and going to the masses" at the 6th Plenary Session of the 11th Central Committee of the CPC. In the new era, Xi Jinping's "people first" philosophy of governance has further enriched and developed the mass line.
How can we gauge the performance or productivity of our transactional teams? They are executing, but we are not sure if they can scale with the expected growth of the company or adapt to the changes in our market. They have asked for more resources, but they cannot point to any driver of capacity other than revenue growth.
Metrics are like the government: necessary, but often useless and, in certain instances, self-serving. They need to be implemented with care and close supervision. Beware of those industry surveys that use "cost as a percentage of revenue" or "employees per function" as a measurement of performance. They provide the illusion of mathematical certainty when, in reality, they are a concoction of firms that do not understand anything other than how to compute an average. Do not buy into their assertions. These firms benchmark the back office headcount costs of your company against other companies with different products, different sales models, and customers in different countries. They compound this error by correlating your cost structure to a supposed cost driver (e.g., revenue) even if that driver is not what actually drives costs in your organization. To top it off, they compare your costs to other companies without divulging that very few, if any, of these organizations are structured in the same manner as yours. The worst aspect of this quackery is that it is the first – and often last – analysis relied upon by the "internal consulting" teams that executives turn to when they want to evaluate their back office teams' performance.
Why aren't the purveyors of these surveys aware of their problems? First, they do not want to admit that their cash cow business does not add real value. They stick to the fallacy that comparing hundreds of companies must add some benefit to an organization, and essentially they cheat you by not telling you how they made their analysis fit your company. Second, they cannot be confronted with the results of their data in any rigorous way, since companies that participate must remain anonymous. You cannot dig any deeper to understand if the survey is a true "apples to apples" comparison. Their fundamental data is flawed, and thus their results are worthless!
Can you benchmark to measure performance? Yes, but not through a survey. Let us return to "Lean", which has workers exercise critical thinking to measure their performance. According to this system, the individuals who perform work should continually take the initiative to tinker, experiment, and improve their processes. They are the planners. They do the work. And they need to evaluate their work to make it the most efficient, and thus profitable, for your company. You cannot improve if you do not measure the timeliness, cost, or quality of your work and the satisfaction your product provides to customers. Measurements in these important areas communicate objective feedback about problems. They enable you to understand the root cause of your issues, whether in manufacturing, distribution, or day-to-day transactional functions. Further, measurements can tell you whether your efforts to improve processes have succeeded. The wrong metrics can lead, fatally, to misplaced priorities. How can you overcome the traditional trade-off notions between cost, time and quality, and implement the right metrics? By prioritizing. "Pursuing time" allows simultaneous improvement along all three dimensions, rather than requiring that trade-offs be made dimension by dimension.
Efforts that focus on cost usually achieve only temporary savings – a reduced headcount assigned to the same workload results in increased error rates and excessive delays. Efforts that focus on error reduction (e.g., SOX) inevitably build up redundant checks, resulting in increases in cost and cycle time. Focusing on cycle time reduction targets the improvement effort on anything that causes delays, and delays are caused by quality problems or constraints; the effort to fix these problems is what drives up costs. In a service context, long cycle times (a long close, a long payroll cycle, large amounts of time to collect or pay bills, etc.) mask issues related to poor order entry, customer-unfriendly purchasing, journal entry errors, inaccurate operating databases, unavailable personnel or systems, rework, and more. Focusing on dramatically reducing cycle time exposes these problems. Once exposed, you can categorize the problems and determine their root causes. Eventually, you can design error detection and prevention into your processes with the goal of achieving zero defects. A relentless focus on and pursuit of this goal will dramatically improve the performance and work tempo of your team.
Consider the financial close, an orchestrated process that aggregates the results from a number of information streams. It is guaranteed that if you target a four-day close or less, you will discover quality issues that were not previously detected. Measure every late, adjusting and immaterial entry delivered during each day of the close. Do the same for reconciliations and reports. Categorize the reasons for each issue every month. Also, track the time that every employee works during the close and where they spend their time. This collection does not have to be precise. It just has to give a rough estimate of where work is performed each month (a minimal sketch of this kind of tally follows the checklist at the end of this section). After a few months of tracking these measurements alone, you will discover the root cause of almost every error and constraint in the close. And you will quickly be able to reallocate resources to deliver the right information to the right customers at the right time each month.
As you kick off your improvement efforts, reach out to your service providers and employees. Ask them their thoughts on which companies have the best back office performance. Introduce yourself to the leaders of these organizations and ask to trade information on your processes. As long as they are not competitors, they will usually comply with your request. Like you, they want to improve, too. Ask how they measure themselves. Ask which initiatives have succeeded and which have not. Ask how they are organized and why. This type of benchmarking – investigating other companies' processes in depth while simultaneously investigating your own – yields the best results. It allows workers to analyze in detail why they do what they do and how they or others can do it better. And it just might prevent another worthless survey from being purchased.
- Do your company's performance evaluations include measures of your teams' timeliness, cost, quality or customer satisfaction?
- Do you base headcount or budget allocations on industry survey results?
- Do your back office teams benchmark their performance or structure against other organizations?
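To make the categorisation idea above concrete, here is a minimal sketch of the kind of close-issue tally described: issues logged with a day, a category, a root cause and a rough time estimate, then counted. The field names and categories below are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of a close-issue log and tally; fields and categories are illustrative.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CloseIssue:
    day: int           # day of the close on which the issue surfaced (1 = first day)
    kind: str          # e.g. "late entry", "adjusting entry", "reconciliation", "report"
    root_cause: str    # e.g. "journal entry error", "unavailable system", "rework"
    hours_lost: float  # rough estimate only; precision is not the goal

issues = [
    CloseIssue(2, "late entry", "journal entry error", 1.5),
    CloseIssue(3, "reconciliation", "inaccurate operating database", 4.0),
    CloseIssue(3, "adjusting entry", "journal entry error", 0.5),
    CloseIssue(5, "report", "rework", 2.0),
]

# Tally issue counts by root cause and by day of close, plus hours lost per cause.
by_cause = Counter(i.root_cause for i in issues)
by_day = Counter(i.day for i in issues)
hours_by_cause = Counter()
for i in issues:
    hours_by_cause[i.root_cause] += i.hours_lost

print("Issues by root cause:", by_cause.most_common())
print("Issues by day of close:", by_day.most_common())
print("Hours lost by root cause:", hours_by_cause.most_common())
```

Even a rough log like this, kept for a few months, is usually enough to show which root causes and which days of the close absorb the most time.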
How to investigate when a robot causes an accident – and why it's important that we do
Robots are featuring more and more in our daily lives. They can be incredibly useful (bionic limbs, robotic lawnmowers, or robots which deliver meals to people in quarantine), or merely entertaining (robotic dogs, dancing toys, and acrobatic drones). Imagination is perhaps the only limit to what robots will be able to do in the future. What happens, though, when robots don't do what we want them to – or do it in a way that causes harm? For example, what happens if a bionic arm is involved in a driving accident?
Robot accidents are becoming a concern for two reasons. First, the increase in the number of robots will naturally see a rise in the number of accidents they're involved in. Second, we're getting better at building more complex robots. When a robot is more complex, it's more difficult to understand why something went wrong.
Most robots run on various forms of artificial intelligence (AI). AIs are capable of making human-like decisions (though they may make objectively good or bad ones). These decisions can be any number of things, from identifying an object to interpreting speech. AIs are trained to make these decisions for the robot based on information from vast datasets. The AIs are then tested for accuracy (how well they do what we want them to) before they're set the task.
AIs can be designed in different ways. As an example, consider the robot vacuum. It could be designed so that whenever it bumps off a surface it redirects in a random direction. Conversely, it could be designed to map out its surroundings to find obstacles, cover all surface areas, and return to its charging base. While the first vacuum is simply reacting to input from its sensors, the second is feeding that input into an internal mapping system. In both cases, the AI is taking in information and making a decision based on it. The more complex things a robot is capable of, the more types of information it has to interpret. It may also be assessing multiple sources of one type of data, such as, in the case of aural data, a live voice, a radio, and the wind. As robots become more complex and are able to act on a variety of information, it becomes even more important to determine which information the robot acted on, particularly when harm is caused.
As with any product, things can and do go wrong with robots. Sometimes this is an internal issue, such as the robot not recognising a voice command. Sometimes it's external – the robot's sensor was damaged. And sometimes it can be both, such as the robot not being designed to work on carpets and "tripping". Robot accident investigations must look at all potential causes.
While it may be inconvenient if the robot is damaged when something goes wrong, we are far more concerned when the robot causes harm to, or fails to mitigate harm to, a person. For example, if a bionic arm fails to grasp a hot beverage, knocking it onto the owner; or if a care robot fails to register a distress call when a frail user has fallen. Why is robot accident investigation different to the investigation of human accidents? Notably, robots don't have motives. We want to know why a robot made the decision it did based on the particular set of inputs that it had. In the example of the bionic arm, was it a miscommunication between the user and the hand? Did the robot confuse multiple signals? Did it lock unexpectedly? In the example of the person falling over, could the robot not "hear" the call for help over a loud fan?
Or did it have trouble interpreting the user's speech?
The black box
Robot accident investigation has a key benefit over human accident investigation: there's potential for a built-in witness. Commercial aeroplanes have a similar witness: the black box, built to withstand plane crashes and provide information as to why the crash happened. This information is incredibly valuable not only in understanding incidents, but in preventing them from happening again. As part of RoboTIPS, a project which focuses on responsible innovation for social robots (robots that interact with people), we have created what we call the ethical black box: an internal record of the robot's inputs and corresponding actions. The ethical black box is designed for each type of robot it inhabits and is built to record all information that the robot acts on. This can be voice, visual, or even brainwave activity. We are testing the ethical black box on a variety of robots in both laboratory and simulated accident conditions. The aim is that the ethical black box will become standard in robots of all makes and applications. While data recorded by the ethical black box still needs to be interpreted in the case of an accident, having this data in the first instance is crucial in allowing us to investigate. The investigation process offers the chance to ensure that the same errors don't happen twice. The ethical black box is a way not only to build better robots, but to innovate responsibly in an exciting and dynamic field.
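As a rough illustration of the idea (not the actual RoboTIPS implementation, which is not described in detail here), an ethical black box can be thought of as an append-only, timestamped log pairing what the robot sensed with what it decided and did. The class, field names and example values below are assumptions made purely for this sketch.

```python
# Toy illustration of an "ethical black box": an append-only, timestamped record
# of the robot's inputs and corresponding actions. All names here are illustrative.
import json
import time

class EthicalBlackBox:
    def __init__(self, path="ebb_log.jsonl"):
        self.path = path

    def record(self, inputs: dict, decision: str, action: str) -> None:
        """Append one decision cycle: sensed inputs, the decision made, the action taken."""
        entry = {
            "timestamp": time.time(),
            "inputs": inputs,      # what the robot sensed at that moment
            "decision": decision,  # what the controller concluded from those inputs
            "action": action,      # what the robot actually did
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Example: a care robot deciding whether a sound was a distress call.
ebb = EthicalBlackBox()
ebb.record(
    inputs={"audio_db": 71, "speech_confidence": 0.42, "fan_on": True},
    decision="speech confidence below threshold; not classified as a distress call",
    action="no alert raised",
)
```

After an incident, an investigator can replay such a log to see which inputs the robot actually acted on, rather than guessing at the "motives" of a black box.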
But just as any revolution only marks the start of a new era, so too are the robots that brought us to this point only the start of a new phase in the development of intelligent technology. They may have already had a transformative impact on numerous industries, but as demands for more intelligent industrial processes grow, robots themselves are undergoing their own evolution.
The thinkers of tomorrow
Robots have traditionally been 'doers': they perform tasks set by humans, according to strict directions given to them. Their go-to function is repetition. What they will need to develop is the capacity to think independently of their human overseers. In short, robots will need to become 'thinkers' – they will need to interpret dynamic data and respond accordingly. In doing so, they will be better able to work alongside human operators in a symbiotic relationship where each focuses on complementary value-added activities. This is necessary for several reasons. Market conditions have evolved as industrial processes in general have grown more sophisticated, and this has changed the nature of competition. In manufacturing, competition over time-to-market for goods has increased dramatically as new technologies have been introduced to the factory floor. This in turn has increased the need to manage and minimise unpredictability in manufacturing processes – to eliminate as much as possible any delays and supply shortages and to perform functions at higher speeds and with heightened accuracy. Introducing to any operation a robot that can think as well as do – in other words, a robot that is equipped with intelligent automation functions – satisfies the dual need for efficiency and reliability. It enables better resource use and increased productivity while simultaneously minimising the likelihood of error, improving quality as a result.
Transforming industry, robot by robot
Comau has been a global leader in the field of industrial automation for close to half a century. Thanks to its continuous commitment to innovation, research and advanced training, it offers its customers expertise and new technologies in the most progressive sectors, including digital transformation and electrification. The company, which is headquartered in Turin, Italy, understood early on that manufacturing would be transformed were robots to become more than just doers, and it set about developing advanced Industry 4.0-enabled robots that could draw on the latest developments in AI and IoT to enhance their performance. AI streamlines deployment and removes the cost of time spent programming and testing, while IoT features put the robot 'in touch' with everything in its environment and predict where maintenance will be needed before obstructions or equipment breakdowns occur. Lasers and AI vision systems can also 'see' potential problems up ahead and thereby reduce downtime. The innovations it has introduced are transformative, and their application is far-reaching. Comau's robots and technologies are present in every industry across the globe, and it is one of only a small handful of companies with such a wide-ranging offering. Its robots meet two pressing needs of modern industry: variability and customisation. These features are central to the 'thinking' robot. Rather than stopping when they encounter an error – say, an engine part in the wrong place on a factory line – the robots are now equipped to look for errors or imperfections themselves and respond immediately, allowing them to continue with their task without delay. And just as they can use the powers of IoT to sense what is around them and predict the actions of nearby machines, so too are those nearby machines now able to do the same with the robots. The whole 'smart' factory environment is in a state of symbiosis, and this has a profound effect on efficiency and safety. Comau has also made sure that the use of these robots, together with a variety of other cutting-edge technologies, is more democratic than it has been historically. The company's aim all along has been to provide not just for the giants of industry but also for small and medium-sized manufacturers, regardless of what they produce. Its solutions, in the form of hardware, software, vision systems, AGVs and more, are all developed in-house and are designed to respond to the unique needs of each customer. It works side by side with the customer throughout the entire process of acquiring, installing and testing its robots.
Freedom to think
The transition from industrial to intelligent robotics is already transforming industrial processes and is set to do so well into the future. Designed into the robots of today is a sense of freedom and autonomy – they now have a level of intelligence that means they can make their own decisions, free of human interference. This in turn frees up human labour to focus on the jobs robots cannot do. Comau sees the future as one of endless enhancement of robots' capabilities. They will be faster and lighter; they will be completely mobile and able to go everywhere; they will play a central role in sustainability; and they will work better alongside people. In short, their impact on industry, as well as on the quality of working life, will be game-changing.
AI is not a switch companies can simply "flip on", and there's no one-size-fits-all AI plug-in. Despite big investments and seemingly expert advice from knowledgeable vendors, many companies still make mistakes along the way.
These mistakes have both tangible and intangible consequences, including loss of sales, unnecessary costs and, perhaps most importantly, a loss of end-user trust. Here are some of the most common mistakes made when deploying AI systems, and how best to avoid them:
- Insufficient penetration: AI is more complicated to implement correctly than many companies realise at the outset. Designed to be part of a holistic business system, it will offer little benefit if only installed at a surface level. For example, many companies use AI in a chatbot function on their frontend. As these bots typically don't have access to a company's core systems, they are no help beyond the most basic of functions and are easily identified as non-human by customers. Without access to the right datasets on the backend, this use of AI will fail to make a meaningful impact on a company's bottom line.
- Incompatibility between different AI systems: Even those businesses that have incorporated AI into their core systems still aren't guaranteed meaningful ROI. A company could be running multiple AI engines at once to support multiple business functions. Problems occur when these engines don't communicate with each other effectively, or give conflicting results and advice.
- Inability to go big: Small-scale AI will only offer small-scale returns. The inability to roll out the technology on a large enough scale holds many companies back from reaping the rewards of their investment. Interestingly, it's often big organisations with unwieldy backends that struggle with this the most.
Vendor bias is another reason why many organisations fail to get their money's worth after investing in AI. Companies traditionally outsource the entire job to a single vendor that delivers an end-to-end solution. However, such huge and abrupt system overhauls are costly, slow and very risky. Most pertinently, this approach also leaves the company with no control or autonomy over the systems it comes to rely on every day. Vendors also naturally prioritise their own technologies, meaning that the vast majority of products on the market are excluded, even if they would provide the best solution for clients. In contrast, thanks to a robust AI ecosystem, companies can select best-in-class products that can be implemented in a seamless and modular fashion to meet their unique needs. You can't just set it and forget it when it comes to AI. Systems should be flexible and adaptable to incorporate the best that today's rapidly changing market has to offer.
"Ultimately, what you really need to understand is that the core of this problem lies in the core of your business, not the technology vendor's business," says Wolf Ruzicka, Chairman of EastBanc Technologies, which helps companies customise and better leverage their existing AI systems. "Instead of having this technology bias, you must own up to the fact that you need to own your own technology destiny."
Only the company itself can drive a modular custom approach that perfectly complements its unique goals, value proposition and customer needs. But most companies don't have this skillset within their existing talent pools. That's where EastBanc Technologies steps in. With more than 20 years of experience, the Washington DC-based team of software engineers puts its clients in the driver's seat by enabling them to design, build and own their AI systems.
Supporting and empowering every step of the way, the EastBanc Technologies team helps companies build modular custom software that quickly unblocks problems and delivers impactful returns. The EastBanc Technologies team starts by identifying a "killer feature" – the unique selling point at the core of the business model that draws the end-user in and evokes emotion. Once the killer feature is identified, an AI module is integrated to enhance that feature. When this first feature is working as it should, other business applications and functions are brought online around the killer feature, progressively cleaning up and connecting data streams throughout the business to the AI systems. Unlike the traditional model, this incremental approach prioritises organic permeation of AI. It is a fast, flexible and low-risk approach that's laser-focused on ROI. "All that companies really have to do is commit to not outsourcing this fundamental addition to their business," says Ruzicka. "[They can add] components gradually on a very granular level to become AI leaders in their respective spaces."
There is all too often a shortage of IT talent and budget for business leads to manage processes effectively. In the banking domain, many banks' internal systems are now decades old, making it problematic to expose APIs and slow to make changes. Operations users in the back office need the flexibility to log and track comments on why something is booked the way it is, and a tool that allows them to manage their operations without having to take on a headcount of hundreds. Low-code/no-code tools can offer support for multiple file types and visibility of data enrichments, without code hidden in the back end. Most importantly, there is less need for a developer to build a reconciliation. Already, this low-code configuration means change management timelines can shrink from months to days – a crucial differentiator when regulations can change so rapidly. To provide regulators with more transparency into real-world banking operations, business users already use some auditable tools, but these are often neither intuitive nor equipped with user-friendly UIs. Other reconciliation tools in use have sophisticated code that does not make sense to business users. This is a key issue, since the way a reconciliation has been set up needs to be explainable to the regulators by the business. Likewise, if the builder leaves the company, there are often no traceability requirements for the reconciliations they have built. Through this lens, the frustrations of the business become clear. Key challenges that banking reconciliations face are cross-departmental reconciliations taking considerable time and effort, audit gaps due to the use of Excel, and the manual nature of adapting to regulation changes through data mapping. Once the bank has upskilled the business in the relevant tools and is past the experimentation phase, the hard and soft benefits of citizen development are manifold. In one scenario, timelines to run a reconciliation were reduced by 50 per cent and there was a substantial reduction in licence costs, with some tools completely decommissioned. Furthermore, the bank adopted a simple and easy-to-use tool for non-technical SMEs, which brought further employee satisfaction and engagement. As organisations have become more confident in digital offerings, citizen development is currently on the slope of enlightenment. But let's be clear: these are simple and functional tools rather than highly innovative ones. Business operational leads don't need a tool of mass innovation – they need a tool of mass usability to help them transition to the plateau of productivity that citizen development offers. (A minimal sketch of the kind of matching rule these tools let business users configure appears below.)
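To illustrate what such a reconciliation amounts to once configured, here is a minimal sketch: two sources matched on a key, with every break flagged and given an explicit reason. The source names, fields and tolerance are illustrative assumptions, not any particular vendor's model.

```python
# Minimal sketch of a rule-based reconciliation: match two sources on a key and flag breaks.
ledger = {"T100": 250.00, "T101": 99.95, "T103": 40.00}      # trade id -> amount (system A)
statement = {"T100": 250.00, "T101": 100.00, "T102": 75.00}  # trade id -> amount (system B)
TOLERANCE = 0.01  # acceptable rounding difference

breaks = []
for trade_id in sorted(set(ledger) | set(statement)):
    a, b = ledger.get(trade_id), statement.get(trade_id)
    if a is None:
        breaks.append((trade_id, "missing in ledger"))
    elif b is None:
        breaks.append((trade_id, "missing in statement"))
    elif abs(a - b) > TOLERANCE:
        breaks.append((trade_id, f"amount mismatch: {a} vs {b}"))

for trade_id, reason in breaks:
    print(trade_id, "->", reason)
```

Because each break carries its own stated reason, the logic stays explainable to the business and to regulators – the kind of traceability that spreadsheets and hand-built scripts tend to lose.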
Relieving CSRs from repetitive manual work with attended automation
Oded Karev, General Manager, NICE RPA
For contact centre agents, the past couple of years have certainly not been easy. That's why leading companies are looking at how they can use automation to address the pressures agents have experienced working from home and dealing with massive call volumes from distressed customers seeking reassurance from a human voice. The new pressures put further strain on the stress points that have always existed in call centres: the challenges of keeping staff up to date with ever-changing technology and regulation, meeting higher customer expectations and driving higher efficiencies. These challenges lead to lower employee engagement and motivation, while increasing the need for retention efforts as many employees experience burnout. Many agents are frustrated with the technology and processes they use to do their work as well as with the work experience itself. The US Contact Center Decision-Makers Guide 2021 reveals that 81 per cent of contact centre decision-makers agree that multiple copy-and-paste steps lead to wasted time and errors. Another 68 per cent said that it's important to reduce after-call work, while 76 per cent agreed that agents find it difficult to learn new systems. Perhaps most concerning of all is the finding that repetitive work remains among the top three reasons for agent attrition. The good news is that contact centre decision-makers are recognising the role that digital technology can play in resolving some of these challenges. Seven in ten, for instance, report that robotic process automation (RPA) can help to reduce average call handling times.
Attended automation: the next step forward for call centres
The benefits of unattended RPA in the back office are, of course, well understood by now: it reduces costs, enables scalable operations and lets people focus on work that requires strategy, creativity and interpersonal skill rather than on repetitive processes. Leading organisations are looking at taking automation a step further by putting robots on the frontline of customer service as enablers for contact centre agents. These desktop robots, or attended automation, can assist agents to perform efficiently and accurately by taking away the need to manually navigate multiple screens and apps.
Attended and unattended robots working together
The benefits of automation really begin to compound when attended and unattended process bots are blended to scale operations and drive higher efficiencies. For example, an attended bot could automatically populate a form or provide the agent with links to data and real-time next-best-action guidance as they help a customer to open a new bank account. Unattended bots could be used to generate an email to the client after the call with the agent is complete, or to generate and categorise technical support tickets on behalf of the service agents. This combination of attended and unattended technology lets people focus on adding value rather than on processes and systems. For automation to be successful and sustainable in a contact centre environment, it needs to enable agents in real time.
Today's sophisticated blend of cognitive, attended and unattended automation solutions delivers this functionality, helping to bridge the gap between employee engagement, customer experience and cost containment. Most call centres want to drive significant and continuous improvement in six key ways, and automation enables enterprises to achieve these goals. At NICE, we have a long history in contact centres as well as a strong track record in attended and unattended automation. Our products are built on 20 years of deep understanding of contact centre operations and technology – and we can offer an integrated suite of RPA and contact centre solutions. With NEVA, our personal agent assistant, we have pioneered attended automation. NEVA resides on the agent's desktop, helping them in real time and in a contextually relevant manner via interactive callout screens. Her unique capabilities ensure real-time process optimisation and automation of desktop tasks, resulting in improved employee and customer experiences. Our ability to span the back office and the contact centre with a comprehensive, intelligent automation solution is unique in the market. We would welcome an opportunity to discuss how we can help your contact centre reconcile the customer experience and employee engagement challenges it faces today.
INDUSTRY VIEW FROM NICE
3D printing offers African countries an advantage in manufacturing
Thousands of years ago, the blacksmith led a technological leap in sub-Saharan Africa. West Africa's Nok culture, for example, switched from using stone tools to iron around 1500BC. Imagine an innovative artisan like this re-emerging in the 21st century equipped with digital technologies. This is not Wakanda science fiction. It is the story of a real promise that 3D printing holds for an industrial revolution on the African continent. 3D printing, also known as additive manufacturing, is a fabrication process in which a three-dimensional object is built (printed) by adding material layer upon layer to build up a series of shapes. The material can be metal, alloys, plastics or concrete. The market size of 3D printing was valued at US$13.78 billion in 2020, and is expected to grow at an annual rate of 21 per cent to a value of US$62.79 billion in 2028. Not only is it a different way of physically making objects, 3D printing also changes the picture of who can participate in industry – and succeed. 3D printing is an excellent match for smaller operators because it does not require huge capital investment. It is also the best fit for "newcomers", while established operators are locked into the old manufacturing method. The new technology is a great opportunity for developing countries to leapfrog over developed countries. In a recent paper, we reviewed the evolution of 3D printing technologies, their disruptive impact on traditional supply chains and the global expansion of the 3D printing market. We show that conditions in the African context are favourable for technological leapfrogging, and propose that universities, industries and government can work together to support this, giving small and medium enterprises a key role. We illustrate our argument using South Africa and Kenya as examples. Technological leapfrogging is related to technology lock-in. Lock-in happens when an established technology continues to dominate the market even after the arrival of a new and superior technology. The older technology remains successful not because it's better but because it got the advantages of an early lead in the market.
In developed countries, where the older technology has taken hold, it’s difficult for new, radical technologies to get a start. Too much has already been invested in the old ways. But it’s different in developing countries. Less has been invested in older technologies. And almost everyone is starting from the same point; the cell phone is an example. For a long time, the African continent has lagged behind the rest of the world in manufacturing. A recent report indicates that while Africa is home to 17 per cent of the world’s population, it accounts for only 2 per cent of global manufacturing value added. 3D printing presents an opportunity to revive this sector through technological leapfrogging. African countries meet the four key conditions highlighted by scholars for technological leapfrogging: To take the first condition, the wage cost of an average African country is a small fraction of the wage cost in a developed country. For example, according to the latest estimates, the average annual income in Nigeria is US$2,000, compared with US$64,530 for the United States. 3D printing is initially unproductive because of lower initial rates of adoption. This means a smaller market and limited profit opportunities. Looking at the third condition, 3D printing is not an incremental improvement on what went before, so experience in the old technology does not count for much. On the fourth condition, one of the strongest arguments for 3D printing is that it flips the dominant logic of traditional manufacturing: scale economies. Big multinational manufacturing corporations invest heavily in machinery, logistics and other material and human resources for mass production. They make big profits only if they sell enough units. The more they sell, the bigger their profit margins. 3D printing doesn’t need centralised high-volume production and large inventory stocking. Suddenly, it pays to produce fewer units. There is no need for heavy investment in manufacturing plants, because 3D printers come in various smaller sizes and at lower costs. There is now a growing market for budget and do-it-yourself 3D printers that cost less than US$200. Smaller and more sustainable All this shifts the advantage in favour of micro, small, and medium scale enterprises. The proximity of 3D printing shops to customers is another advantage as it reduces logistics costs and supply chain challenges. Micro and small-scale 3D printing shops can offer work and income opportunities for households. University, industry and government Our study proposes a way for the university, industry and government sectors on the African continent to work together to harness the opportunities offered by 3D printing. These domains – producing knowledge, producing goods and regulating economic relations – have tended to be disconnected. Instead, we argue that greater integration can encourage innovation. We give examples from South Africa and Kenya to illustrate the challenges and opportunities. In South Africa, universities are leading the drive to provide training and retraining programmes for engineers, technologists and other professionals involved in 3D printing. Much more needs to be done to develop new curricula, research and programmes in additive manufacturing. Kenyan universities are at an earlier stage, focusing on convening networking and knowledge exchange events. In the government sector, South Africa has the most detailed policy document of any African country on 3D printing. 
The country’s 3D printing strategy is being led through the Ministry of Science and Technology, and through agencies such as the Council for Scientific and Industrial Research and Technology Innovation Agency. In the industry sector, South Africa’s Rapid Product Development Association works closely with the government to organise conferences, workshops and community engagement activities. The results so far The South African 3D printing industry has had considerable success in recent years, driven by a growing community of enthusiasts and designers. Small enterprises and startups are making inroads in areas such as 3D printing of cell phone accessories, car accessories, and jewellery. In 2014, South African doctors used 3D-printed titanium bones to perform a jaw-bone transplant surgery, the second in the world. There are also recent applications of 3D printing in housing. The three spheres need to do more work in research investment, policy interventions and strategic public procurement. And they need to cross boundaries. Universities can commercialise and contribute to policies. Industry can invest in research and influence policies. Governments can play in the market and in knowledge production. Michael Goepfarth, CEO, SCIO Automation Despite a surge in interest over the past decade, automation isn’t new – and nor is the concept of the “smart factory”. Manufacturers and designers have been trying to make factories “smarter” since the late 18th century, when new manufacturing processes developed in Europe gave rise to the industrial revolution and brought to the production of goods a level of speed and efficiency that had eluded previous generations of workers. But the automation of factories today of course has a very different feel to that of the Industrial Revolution. By the late 1700s, new machines were appearing on the factory floor, but large workforces were still indispensable to the manufacturing process. There were no computers that could gather data on output, or that could predict where problems in the factory line might occur. There was heavy wastage, both material and financial, and production – despite being much faster than before – was still slow. In the present era, however, innovations in technology stand to drastically reduce waste and enhance efficiency. The “smart factories” currently being imagined and developed are fully digitised spaces where AI and machine learning provide fine-grained data on every aspect of the operation. They have come about in tandem with advancements in logistics, among them smart warehouses and conveyor systems, and autonomous material flow – developments that are essential to the “smart” functioning of factories. The myriad aspects of the manufacturing universe – the factory, the supply chain; even the talent pipeline – are intimately connected through new technologies; indeed, the processes that have improved production have grown together with those that improve logistics, such that the two have a symbiotic relationship. In the factory, floor managers can see in real-time how efficient production is on any given day, where wastage is occurring, and where problems, such as blockages in a machine, might arise. So too is each component of a supply chain – where transportation of goods might be problematic; where supply might be low – subject to far closer scrutiny than previously. In short, the smart factory of today, and the environment it sits within, brings a far more enhanced level of operation than previous manufacturers enjoyed. 
But not everyone is jumping on board. Manufacturing companies can be unwieldy things, averse to change and lacking the flexibility to take their operations in a new direction. Yet they also know that if they don’t evolve, they will lose out on business, and costs will remain high. SCIO Automation understands that technical innovation should no longer be viewed as merely an advantage – something that will make a business stand out from the crowd – but as key to the survival of companies involved in, or reliant on, manufacturing. It’s not a niche product; rather, it has become central to competitiveness. Without modern automation, businesses will die. SCIO has been tailoring integrated automation solutions to clients’ needs at all operational and information technology levels for decades, and in the process has become a linchpin of the global transition to Industry 4.0. If there isn’t a solution for its clients currently on the market – for instance, Autonomous Mobile Robot solutions for the demanding and flexible transportation of parts from the warehouse shelf to the production line – SCIO can design one. Central to smart factories is the harmonising of the different logistical and production elements of their operation. Every aspect needs to work together, towards one goal. But companies have struggled with the transition to smart manufacturing in part because they have taken automation on a component-by-component basis. SCIO impresses upon clients the need for the connectedness and symbiosis of logistics and production, and within that, all the “smart” parts – sensors, connected devices, cloud computing, Big Data and more. This is important because improved productivity and efficiency rest on managers being able to see the entire operation in real time, and in one place. Huge advancements have been made in the technologies designed to improve efficiency on the factory floor, as well as along the supply chains that products pass through when they leave the factory. Not every business has the confidence to make the changes necessary to stay competitive, however – these changes are, after all, daunting, and there is risk. But with the right kind of help, those risks can be mitigated. And when manufacturing businesses do begin their march towards Industry 4.0, the long-term benefits of smart factory technology – improved workplace safety, enhanced productivity, minimised waste, lower operating costs – will soon materialise.

The personification of the black box

Dr Sam Anthony, CTO and Co-Founder, Perceptive Automata

When you use traditional AI techniques in a self-driving car, you end up with a vehicle that sees humans as black boxes. These boxes move around, and sometimes you can attach labels to them – this one is tall, this one is small, this one is holding its arm up – but you don’t understand them. Navigating around black boxes is hard. They move in unexpected directions, and if you don’t want to hit them you have to be incredibly cautious, assuming they could move in any direction at any time. In fact, there are many situations where it’s simply impossible to figure out how you can get past a black box if you can’t hit it. It would be better if you could understand them not as black boxes, but as people – people who have ideas and goals and are trying to figure out how to interact with you.

A big problem for autonomous driving systems

When traditional AI models are trained, what they’re learning is a mapping of a label to an image. You take thousands of images and generate numbers for them.
How those numbers get generated, or what those numbers mean, isn’t part of the process, but the AI learns what images tend to go with what numbers. What the AI is capable of doing is predicting what the black box attached to that image should say and do. It’s trying to figure out which black box best represents which image. That’s it. When you put that system in a car and it sees a pedestrian out in the world, it matches that pedestrian with the black box that fits it best. But it doesn’t really know anything about that pedestrian. It doesn’t have any ability to reason about what’s in that person’s head. Black boxes don’t want to cross the street – black boxes don’t want anything. Black boxes don’t know you’re there. They have no inner life.

Solving the problem with research

At Perceptive Automata, we remove the black box. When we train AI, we do something different. We still take thousands of images, but instead of opaquely, mindlessly applying labels to them, we integrate the personhood of the labellers deeply into the training process. As each of those thousands of images is shown to the AI, what it’s learning is not a set of disconnected numbers. It’s learning what people think about that image. In particular, it’s learning how people would answer questions about what’s in the head of a pedestrian pictured before them. This isn’t easy to do. To ask people questions about what’s in the minds of pedestrians and get answers that are usable for training AI requires scientific rigour and a great deal of art. You need expertise in visual psychophysics, a field of science dedicated to measuring how people see and respond to the world. You have to be a people expert and know how to understand, study and characterise them. AI trained this way, like Perceptive Automata’s SOMAI, or State of Mind AI, no longer sees pedestrians as black boxes. When you put our AI in a car, it sees pedestrians as people do. Instead of merely identifying what set of numbers maps most accurately to a given pedestrian, SOMAI is able to answer questions about what’s in that pedestrian’s head. It does this by imagining the people who trained it and hearing their voices. In effect, it answers the question: “If there were 500 people here in the car with you, and you asked them all, what would they say about whether that pedestrian wants to cross in front of your car?” When you understand pedestrians as people, who have goals and desires that interact with the goals of the vehicle, they stop being black boxes that might move anywhere at any time. They want specific things. Some of them standing at a crossing want to cross. Others maybe don’t. Without a real-time understanding of what’s in pedestrians’ heads, autonomous vehicles will be stuck trying to navigate around black boxes, and the promise of this industry will not pay off.

Maintenance issues can quite literally make or break business operations. For manufacturers, this is why harnessing the ever-growing IoT allows them to accelerate smart manufacturing and rapidly improve their processes. IoT can help augment a number of different business operations, from customer experience to process maintenance. In manufacturing, artificial intelligence and IoT applications can efficiently deal with various operations, from predictive monitoring and preventative maintenance through to optimising equipment performance, quality control in production and even the much talked about human-to-machine interaction.
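As a small illustration of the predictive-monitoring idea just mentioned, the sketch below watches a single vibration signal on a motor and suggests maintenance before the expected failure point. It is a deliberately simplified, hypothetical example: the alert threshold, window size and readings are assumptions, not values from any real deployment or standard.

```python
from statistics import mean

VIBRATION_ALERT_MM_S = 6.5   # assumed warning level for this hypothetical motor
WINDOW = 5                   # number of recent readings to average

def maintenance_alert(readings_mm_s):
    """Return True if the recent average vibration suggests scheduling maintenance."""
    if len(readings_mm_s) < WINDOW:
        return False
    return mean(readings_mm_s[-WINDOW:]) > VIBRATION_ALERT_MM_S

# A slowly worsening bearing: the alert fires before an outright failure stops the line.
readings = [2.1, 2.3, 2.2, 4.8, 6.9, 7.4, 7.8, 8.1]
print(maintenance_alert(readings))  # True
```

In practice the same pattern is usually fed by streaming data and a learned model rather than a fixed threshold, but the effect is the one described above: planned downtime instead of unplanned stoppages.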
All of this encompasses a reduction in product cycle time and greater efficiency through reduced downtime. So why is it that adoption of these technologies is slower in the UK? There is an argument that it is the public sector that is holding back IoT adoption, due to a requirement to provide a strong return on investment. But in the manufacturing industry, the downtime reduction alone is a clear return. Another plausible explanation is a lack of infrastructure throughout the UK. IoT adoption requires strong communication systems: fast, reliable networks and, in many cases, 5G connectivity with its promise of minimal latency. However, it doesn’t appear that sufficient investment has been made to support these areas. Throughout the UK, the most significant IoT advances have been made by the healthcare and military sectors. Through patient monitoring and remote check-ups, the NHS is becoming a pioneer for this technology. Another barrier to entry can be the time lag in realising the return. Many IoT development projects take time and involve workforce upskilling and higher initial costs. On average, IoT projects take around two to five years, and for many organisations this is a tough investment to make. However, this can be down to the lack of expertise and confidence among executives and board members who are the decision-makers for these technologies. Often, it is the person on the shop floor who really feels the benefit of these technologies, but they are not the decision-maker – causing a disconnect. It is essential that the industry is reassured that using IoT will most certainly improve processes, and that as the technology develops, security vulnerabilities are being steadily reduced. Another concern for many businesses surrounds privacy and security constraints. To ensure more projects deliver the benefits they are looking for, UK organisations need to plan ahead to address current and future challenges. Connectivity must be meticulously considered from the get-go to ensure that issues along the way don’t impede the ultimate rollout. It is essential that upper management and executives understand the impact of high device volumes from a cost and resource perspective, ensuring greater visibility across security, maintenance and performance monitoring. They must seek to create a more efficient working environment for their workforce through these technologies in order to successfully realise a return on investment. Of course, along with infrastructure, investment in IoT security will become increasingly critical, and it must come to be trusted as being as reliable as older security systems. Overall, the UK is continuing to build momentum around the IoT. Yet there is still much to do to progress to faster, wider, large-scale adoption. If organisations are able to understand that maximising the value of IoT-generated data will help future-proof projects, growth can be unparalleled.

Industry 4.0 technologies won’t necessarily solve any of these issues for you outright, but they will enable you to home in on and quantify solutions to those things you can directly inspire, inform and influence. Harvesting, analysing and acting on the right data in real time offers increased speed and ability to address your pain points within the business, and lies at the very heart of Industry 4.0.

Why should I even spend time thinking about all this?

Fundamentally, there are two reasons. First, reduced costs.
Your operating costs should fall and your available time should rise as a result of using the right digital tools within your business. Second, staying ahead. It’s likely that many of your competitors, collaborators and clients may well be exploring or increasing their use of digital technologies within their businesses. Stay in the game, get yourself up to speed and avoid getting left behind by innovating before they do.

Where do I start?

Set your sights high but start with a grounded view. Don’t spend money on “digital” if you haven’t already optimised your “physical”. The adage remains: get lean, then get digital. You need to find out what’s really happening within your manufacturing operation, or as we say, create a single version of the truth. To do this you will need to digitally connect your existing machines and information systems across the business. This used to be the privilege of big businesses that could afford expensive bespoke programmes to connect their systems. The new digital tools bring such connectivity between systems such as ERP (enterprise resource planning) and CRM (customer relationship management) within the grasp of any SME. To complete this task, it’s likely that you will need to add some simple and relatively inexpensive sensors to your existing machines (at the cost of a few pounds) and some new connecting protocols to your network. To do this and make sense of the data generated, you may need to get help. Challenge your new apprentices or latest recruits to work with your champion on this. Failing that, try contacting your local further education college, university engineering department, equipment supplier or Catapult centre. Having gained a better understanding of the key factors at play within the business, you’ll be in a much better position to shine the spotlight on those parts of your operation which require deeper examination, and that will give you savings and increased flexibility. It’s vital to act on these insights into your operation and reap the rewards before moving on to the more advanced steps, where you will need to invest your hard-earned cash in further technology. As anyone who has ever been through a new ERP or control system implementation knows, there is no point at all in digitising poor productivity (at best) or digitising chaos (at worst).

Creating new gains

Industry 4.0 is all about taking your existing human capital, shop floor equipment and back office systems and connecting these valuable assets. Doing so gives you a clearer and faster view of your world, and enables your team to save money and time, invest those savings in the right technology at the right time with a clear return on investment, and spend more time with your existing and new customers to grow your business.

The climate crisis: ensuring positive change

John Riggs, SVP Applied Technology Solutions, HSB

In April 1865, two weeks after Abraham Lincoln was assassinated, an overcrowded paddle-steamer exploded on the Mississippi river just outside Memphis. Its overworked, badly patched steam boilers were to blame. More than 1,500 people – mainly Union prisoners being transported home from the US Civil War – lost their lives. To this day, the disaster that befell the SS Sultana remains the worst in US maritime history. It wasn’t a one-off: steam, which had become an indispensable element of the Industrial Revolution, and the central power source for a wide variety of functions, including transportation, manufacturing and heating, was dangerous and unpredictable.
Steam boilers had a tendency to explode, and when they did the results were catastrophic. A group of engineers in Hartford, Connecticut took up the challenge: how do you make a safer steam boiler? After much strenuous experimentation, they landed on a set of technological innovations they could back up with guarantees. HSB (Hartford Steam Boiler) was born. This, of course, provides a too-close-for-comfort metaphor for our current predicament. We are all passengers on the SS Sultana, except this time around it isn’t a steam-paddler – it’s our planet. And there aren’t a bunch of steam boilers at risk of exploding. Rather, there are myriad deeply intertwined natural systems that are overheating, breaking down or on the verge of blowing up. The evidence is abundant wherever we look. What can we, as insurers, do about this? The immediate and somewhat obvious response is to offer more robust insurance, so that when the climate-related flood, fire or windstorm inflicts catastrophic damage on your property, we’ll help to make you whole again. This sort of response is all well and good, but only addresses the consequences of climate change. We’re much more interested in helping to slow, even reverse, climate change. There are two areas where we can make a difference, and the value we bring relies on a number of closely related elements. Every project is different. Sometimes we’re dealing with large inventories of legacy machinery; at other times we approach projects with inventive business models enabled by the enormous progress in sensing and monitoring techniques. Let’s be straight: these are enormous challenges, orders of magnitude greater than perhaps anything we’ve ever faced. After two centuries of spewing massive volumes of carbon and other noxious substances into the atmosphere, basing our entire industrial infrastructure – and the wealth it generates – on the burning of fossil fuels, we are talking about nothing less than the total overhaul of the world’s technological, industrial and agricultural operating systems. How much will it cost? According to Goldman Sachs, somewhere around a staggering $100 trillion over the next 20 to 30 years. We feel fortunate in having an established track record of applying latest-generation technology to thorny problems, and backing these solutions with increasingly sophisticated guarantees that promote their adoption and reassure our customers. That’s what we, as insurers, are supposed to do, right? But we don’t have all the answers – far from it. And we can’t do this alone. As a player in a highly competitive arena, we routinely keep things close to the vest. We never show our hand. But given the nature of this set of challenges, let us say clearly and unequivocally: we hope and trust that our peers and competitors are working on this as hard as we are. We hope that they are inspiring their workforces to apply their best thinking to come up with solutions that work. There is an enormous amount to be solved, a tsunami of work to be done. Those who succeed will be amply rewarded, both financially and – more importantly – in the knowledge that they’re helping ensure the survival of future generations. We’re in this to win. For all of us.

INDUSTRY VIEW FROM HSB

Supercharging customer experience for financial services with Voice AI technology

Kun Wu, Co-founder and Managing Director, AI Rudder

Customer experience will never go back to how it was pre-Covid.
The pandemic has accelerated digital adoption and changed consumption habits, from on-demand video streaming, e-commerce and food delivery to more digitalised services such as digital banking and payments. Almost everything is available at consumers’ fingertips as 24/7 availability becomes the new normal. The rise of a hyper-connected and hyper-convenient world has also led to an exponential rise in call volumes at customer service centres across geographies and industries. As a result, businesses struggle to keep pace as pandemic-weary consumers run out of patience and want both speed and flexibility in their digital interactions. While some of the world’s biggest brands have taken innovative approaches to meet this new standard of customer service, responding to new customer expectations means a radical departure from the status quo.

Faster, more intelligent on-demand customer support

In banking and financial services, customer support functions are oversubscribed. However, things would look very different if your top call centre representatives could work 24/7 without fail. AI Rudder can make this vision a reality through advanced Voice AI technology. Our Voice AI uses automatic speech recognition (ASR) and natural language understanding (NLU) to process human conversations. Our machines can receive and interpret customer intent in voice communications. Not only that, they can respond and communicate at a near-human level of intelligence. From payment reminders to debt collection to quality assurance, Voice AI assistants can take every repetitive task off your customer support team’s workload, freeing them up to focus on more complex conversations that require a human touch.

Bringing the human touch with AI-powered voice automation

Adopting AI does not need to come at the expense of human relationships. While customers may feel sceptical about automated solutions using AI, the technology helps bridge fundamental gaps in delivering exceptional CX. A Voice AI solution can help companies extract precious data from incoming customer calls and existing recorded conversations. This data can provide banking and financial services companies with valuable insights that would otherwise be overlooked, and which can even help forecast peak demand periods based on identified patterns. Insights from AI-decoded conversations can also lead to product innovations and service optimisation, making your company stand out in a saturated market. Customer experience teams can even use real-time data from Voice AI to predict issues before they occur, such as a sophisticated phishing scam targeted at customers. By alerting your customers to potential threats, you build greater confidence that their assets are safe in your hands. Besides improving customer experience, businesses can use AI-driven insights from voice conversations to identify gaps in knowledge across client-facing teams. This precious information can feed into training programmes, helping you build a top-class customer service department.

Working with banks and financial services companies around the world

Founded in 2019, AI Rudder develops advanced voice AI technology to help businesses solve B2C communication challenges. We work across various industries, including banking and finance, fintech, insurance and e-commerce. More than 200 companies around the world use our platform today to augment their human agents with our AI voice assistants, maximising profits, efficiency and scalability.
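The following sketch shows, in deliberately simplified form, where ASR and NLU sit in a voice-assistant flow like the one described above. It is not AI Rudder's code or API: `transcribe()` stands in for a real speech recogniser, the keyword matching stands in for a trained NLU model, and the intents and replies are invented.

```python
def transcribe(audio_bytes):
    """Placeholder for an automatic speech recognition (ASR) step."""
    return "i'd like to check when my card payment is due"

INTENTS = {
    "payment_due_date": ("payment", "due"),
    "report_fraud": ("fraud", "stolen", "unauthorised"),
}

def detect_intent(text):
    """Toy natural language understanding (NLU): map keywords to an intent."""
    for intent, keywords in INTENTS.items():
        if any(word in text for word in keywords):
            return intent
    return "handover_to_human"   # unclear or complex requests go to a person

def respond(intent):
    replies = {
        "payment_due_date": "Your next payment is due on the 28th.",
        "report_fraud": "I'm connecting you to our fraud team right away.",
        "handover_to_human": "Let me transfer you to a colleague who can help.",
    }
    return replies[intent]

text = transcribe(b"...")           # raw call audio would arrive here
print(respond(detect_intent(text)))
```

The repetitive, well-bounded intents are the ones worth automating; anything the toy model cannot classify falls through to a human agent, which mirrors the division of labour described in the article.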
With our Voice AI solution, businesses can increase the scale, speed and quality of their customer experience while reducing operational costs. Implementation won’t be an issue because AI Rudder’s platform also has an open API that makes integration and deployment fast, easy and seamless. With businesses overwhelmed by customer requests during Covid-19, Voice AI has proven to be a natural choice – especially in financial services and banking – to provide the agility needed to deal with the business impact of the pandemic.

INDUSTRY VIEW FROM AI RUDDER

Using AI in agriculture could boost global food security – but we need to anticipate the risks

As the global population has expanded over time, agricultural modernisation has been humanity’s prevailing approach to staving off famine. A variety of mechanical and chemical innovations delivered during the 1950s and 1960s represented the third agricultural revolution. The adoption of pesticides, fertilisers and high-yield crop breeds, among other measures, transformed agriculture and ensured a secure food supply for many millions of people over several decades. Concurrently, modern agriculture has emerged as a culprit of global warming, responsible for one-third of greenhouse gas emissions, namely carbon dioxide and methane. Meanwhile, food price inflation is reaching an all-time high, while malnutrition is rising dramatically. Today, an estimated two billion people are afflicted by food insecurity (where access to safe, sufficient and nutrient-rich food isn’t guaranteed). Some 690 million people are undernourished. The third agricultural revolution may have run its course. And as we search with urgency for innovation to usher in a fourth agricultural revolution, all eyes are on artificial intelligence (AI). AI, which has advanced rapidly over the past two decades, encompasses a broad range of technologies capable of performing human-like cognitive processes, such as reasoning. It is trained to make these decisions based on information from vast amounts of data.

Using AI in agriculture

In assisting humans in fields and factories, AI may process, synthesise and analyse large amounts of data steadily and ceaselessly. It can outperform humans in detecting and diagnosing anomalies, such as plant diseases, and making predictions, including about yield and weather. Across several agricultural tasks, AI may relieve growers from labour entirely, automating tilling (preparing the soil), planting, fertilising, monitoring and harvesting. Algorithms already regulate drip-irrigation grids, command fleets of topsoil-monitoring robots, and supervise weed-detecting rovers, self-driving tractors and combine harvesters. A fascination with the prospects of AI creates incentives to delegate further agency and autonomy to it. This technology is hailed as the way to revolutionise agriculture. The World Economic Forum, an international nonprofit promoting public-private partnerships, has set AI and AI-powered agricultural robots (called “agbots”) at the forefront of the fourth agricultural revolution. But in deploying AI swiftly and widely, we may increase agricultural productivity at the expense of safety. In our recent paper published in Nature Machine Intelligence, we have considered the risks that could come with rolling out these advanced and autonomous technologies in agriculture.

From hackers to accidents

First, given these technologies are connected to the internet, criminals may try to hack them.
Disrupting certain types of agbots could cause heavy damage. In the US alone, soil erosion costs US$44 billion (£33.6 billion) annually. This has been a growing driver of demand for precision agriculture, including swarm robotics, which can help farms manage and lessen its effects. But these swarms of topsoil-monitoring robots rely on interconnected computer networks and hence are vulnerable to cyber-sabotage and shutdown. Similarly, tampering with weed-detecting rovers would let weeds loose at considerable cost. We might also see interference with sprayers, autonomous drones or robotic harvesters, any of which could cripple cropping operations. Beyond the farm gate, with increasing digitisation and automation, entire agrifood supply chains are susceptible to malicious cyber-attacks. At least 40 malware and ransomware attacks targeting food manufacturers, processors and packagers were registered in the US in 2021. The most notable was the US$11 million ransomware attack against the world’s largest meatpacker, JBS. Then there are accidental risks. Before a rover is sent into the field, it’s instructed by its human operator to sense certain parameters and detect particular anomalies, such as plant pests. It disregards, whether by its own mechanical limitations or by command, all other factors. The same applies to wireless sensor networks deployed in farms, designed to notice and act on particular parameters, for example soil nitrogen content. Through imprudent design, these autonomous systems might prioritise short-term crop productivity over long-term ecological integrity. To increase yields, they might apply excessive herbicides, pesticides and fertilisers to fields, which could have harmful effects on soil and waterways. Rovers and sensor networks may also malfunction, as machines occasionally do, sending commands based on erroneous data to sprayers and agrochemical dispensers. And there is the possibility of human error in programming the machines. Agriculture is too vital a domain for us to allow hasty deployment of potent but insufficiently supervised and often experimental technologies. If we do, the result may be that they intensify harvests but undermine ecosystems. As we emphasise in our paper, the most effective method to treat risks is prediction and prevention. We should be careful in how we design AI for agricultural use and should involve experts from different fields in the process. For example, applied ecologists could advise on possible unintended environmental consequences of agricultural AI, such as nutrient exhaustion of topsoil, or excessive use of nitrogen and phosphorus fertilisers. Also, hardware and software prototypes should be carefully tested in supervised environments (called “digital sandboxes”) before they are deployed more widely. In these spaces, ethical hackers, also known as white-hat hackers, could look for vulnerabilities in safety and security. This precautionary approach may slightly slow the diffusion of AI. Yet it should ensure that the machines that graduate from the sandbox are sufficiently sensitive, safe and secure. Half a billion farms, global food security and a fourth agricultural revolution hang in the balance.
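As a concrete illustration of the precautionary design argued for above, the sketch below shows one simple guard an autonomous fertiliser dispenser could enforce: whatever dose a yield-maximising model requests, the control code clamps it to a ceiling agreed with ecologists and records the attempt for human review. The cap and the requested dose are invented placeholders, not agronomic recommendations, and this is only a sketch of the idea rather than code from the paper.

```python
MAX_NITROGEN_KG_PER_HA = 150   # assumed ecological ceiling, set with ecologists' input

def safe_dose(model_requested_kg_per_ha):
    """Clamp a model-suggested nitrogen dose to the agreed ceiling."""
    if model_requested_kg_per_ha > MAX_NITROGEN_KG_PER_HA:
        # Record the clamp so people can review why the model wanted to exceed it.
        print(f"requested {model_requested_kg_per_ha} kg/ha, capped at {MAX_NITROGEN_KG_PER_HA}")
        return MAX_NITROGEN_KG_PER_HA
    return model_requested_kg_per_ha

print(safe_dose(210))   # the over-application is refused, whatever the predicted yield gain
```

Guards of exactly this kind are also the sort of behaviour a digital sandbox can probe before a system is allowed into the field.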
Holiday celebrations call for a celebratory drink, and nothing fits the bill quite like a cup of rich, creamy eggnog, with or without a splash of bourbon, brandy, or rum. But as with many holiday treats, eggnog—traditionally made with eggs, cream, milk, and sugar—is loaded with calories, fat, and added sugars. And there’s an additional health concern with eggnog: if it’s made with raw eggs, it can be a food-poisoning risk. Does that mean you should take a pass on this holiday cup of cheer? Not necessarily. You should just remember to indulge in moderation, and check out these nutrition and safety facts before you raise your glass.

A Serving Is Smaller Than You Think

Usually, the serving size for a drink is 1 cup (8 fluid ounces). But for eggnog, the serving size on the nutrition facts panel is just a half-cup. If you drink more than that, remember to double (or triple) the figures for calories, fat, and added sugars you see on the carton. The nutritional content of different brands varies, but not by much. In general, the regular dairy versions contain 170 to 190 calories, 9 grams of fat, and 11 to 14 grams of added sugars. Adding an ounce (a little less than a shot glass) of rum, brandy, or other spirits tacks on 65 calories.

‘Light’ Eggnog Isn’t So Light

According to Consumer Reports, when you’re scanning the selection of pre-made eggnog at a store, you’ll see several takes on the traditional recipe. Those labeled “low fat” or “light” typically contain about 140 calories and less than 4 grams of fat per half-cup serving. But the added sugars content is similar to or only slightly lower than regular eggnog. For example, Hood’s Golden Eggnog has 180 calories, 9 grams of fat, and 16 grams of added sugars. Its Light Eggnog has 140 calories and 4 grams of fat, but the same amount of added sugars. The dairy eggnogs with the least added sugars are regular Organic Valley Eggnog and Trader Joe’s Lite Eggnog, both containing 11 grams per half-cup.

Vegan Eggnogs Can Be Healthier

Holiday eggnog made from nut or soy milk will give you the flavor of the season, and it tends to be lower in calories and saturated fat because it doesn’t contain cream, eggs, or milk. These versions are usually lower in added sugars than dairy ones, too. Califia Almond Holiday Nog and Trader Joe’s Almond Nog contain just 50 calories, 1.5 grams of fat, and 8 grams of added sugars.

You Could Consider Making Your Own, but That May Not Be a Great Idea

Homemade eggnogs can be even higher in calories, fat, and sugars than commercial versions. Using a traditional eggnog recipe spiked with bourbon or rum, a half-cup serving may contain as much as 265 calories, 17 grams of fat, and 18 grams of sugars. But you can lighten up a recipe by substituting half-and-half for heavy cream and using about half the sugar called for. One advantage of making your own is that you can avoid processed ingredients, such as artificial and natural flavors, artificial colors, and thickeners such as gums or carrageenan. (“Natural flavors” must come from a natural source but can be highly processed with chemicals and include many ingredients that don’t have to be disclosed.) Most eggnogs have at least one of these, but Organic Valley Eggnog is a dairy option that contains only gellan gum and no artificial or natural flavors.

It Doesn’t Take Much to Make Eggnog Safer

Classic eggnog recipes call for raw eggs. Eggnog made with raw, unpasteurized eggs can contain salmonella, a leading cause of food poisoning.
The bacteria can make anyone sick, but young children, older adults, pregnant women, and anyone with a weakened immune system are particularly vulnerable. You can ensure that you and your guests are sipping safely, though: almost all of the eggnog sold in stores is pasteurized, which kills bacteria, but be sure to check that the carton or bottle is clearly labeled as such. If you make your own, use pasteurized liquid eggs, which are sold in a carton. Or heat raw eggs (mix them with milk and stir constantly) to 160° F to kill any salmonella bacteria that may be present before adding them to your recipe. Don’t count on alcohol to kill the bacteria, because the concentration isn’t high enough to reduce the risk of illness.
Pumas in the Tietê River Basin: the use of a top predator as a tool for biodiversity conservation in the State of São Paulo.

The project aims to obtain information on puma ecology in a highly fragmented environment. This information is not only important to support species conservation; it also serves as a baseline to evaluate the environmental health of the areas under AES Tietê influence. The information collected by the project will allow management decisions concerning the puma and contribute to the biological integrity of the remaining fragmented natural vegetation in the State of São Paulo. The project began in November 2013 with familiarization and reconnaissance of the study area, covering roads, cart-roads and trails. We also interviewed the local community about sightings and traces of pumas. Next, our team installed camera traps in strategic places to record the local fauna. After 12 consecutive months of photographic records, the project moved on to the capture and marking phase. The pumas were fitted with radio collars equipped with GPS/satellite so that their movement patterns and diets could be analyzed. Work on the project is in full swing… Soon, we will provide more detailed information!
“Please, do not point your weapons at the sky […] I am not afraid, I am not a coward, I would do everything for my country; but don’t talk so much about atomic rockets, because a terrible thing is happening: I haven’t kissed much”, wrote the poet Carilda Oliver Labra in 1962, when the Missile Crisis turned Cuba into one of the places most vulnerable to the outbreak of a nuclear conflagration. Of course, preferring verses to slogans was not well seen, especially at a time when the whole world seemed to be divided into two antagonistic and belligerent blocs. Today that situation repeats itself. The world is broken and polarized, forcing one and all to take sides. The diversity of voices is less and less visible, while public discussion simplifies complex issues to the point of becoming a debate about unconditional support or rejection – also unconditional. “Whoever is not with me is against me.” That is the password that resounds everywhere, on both sides. In such a polarized scene, it is not surprising that those who do not identify with these extreme – and sometimes also extremist – discourses have chosen to remain silent to avoid confrontation. Talking about peace, serenity and equidistance is simply frowned upon again.

An ancient fable reveals the importance of equidistance and serenity in solving problems

An old fable tells the story of a man who had only one very valuable possession: a ring that he had inherited from his father. One day, he stopped at the riverbank to cool off, but he slipped on a stone and fell into the water. The poor man got a big scare, but when he stood up he found that he had lost his valuable ring. He immediately became very nervous. He had to find that ring at any cost. He began to scratch at the sand on the bottom with his hands, going around in circles. But the more he toiled, the more the water became muddy with sand. The man could not find the ring and tried even harder, stirring up the river bed. A Buddhist monk who had seen everything from a distance asked him to stop, but the man couldn’t hear him. He was too nervous and frustrated. He could only think of his loss and his distress. Anger was building inside of him. Then the monk came to his side, touched his shoulder and said, “Stop, calm down!” The man calmed down and walked out of the river. In a few minutes, when the sand settled to the bottom and the water cleared, he could make out the glow of his ring. Then he serenely retrieved it and went on his way. This ancient parable shows us the value of serenity and the importance of being able to “step out” of our problems to adopt a better perspective that helps us solve them. In fact, in psychology, when a person has a problem that torments them or a conflict that they need to resolve, they are helped to adopt a psychological distance. That distance serves to calm the emotions that do not allow them to see more clearly what is happening. It serves to dissipate frustration and anger, giving way to a more balanced vision that allows them to make the best possible decision.

The reviled value of equidistance

Equidistance: the equality of distance between two points, beings or elements. From the Latin aequus, which means “equal”, and distantis, which means “distant”, it not only means being at a certain distance from two points but also assuming a privileged position from which to analyze those two positions.
Equidistance implies being able to control our passions in a conflictive moment so as not to believe blindly in either of two positions – often antagonistic and seemingly irreconcilable – which present themselves as the only possible options at a moment in which we feel trapped, whether emotionally or morally. Many times that equidistance is confused with disinterest, cowardice or an inability to commit. In reality, it is quite the opposite: it is an exercise in maturity and self-determination. Equidistance is committing to freedom of decision. It is standing firm against the attacks of one side or the other. It is not being manipulated. It is not falling into the temptation of thinking that there is a summum bonum fighting with a summum malum. Equidistance is what allows us to connect with our deepest values and listen to our inner compass to decide which way to go when the world becomes too chaotic. It is what prevents us from becoming soldiers who fight on one side or another, blindly convinced that they possess the TRUTH. It is, ultimately, what helps us form our own opinion and go beyond polarization. In fact, polarization, without middle ground, only leads to confrontation and, unfortunately, this is usually resolved by imposing one option over the other, erasing everything that does not coincide, silencing divergent opinions, canceling the diverse culture, simplifying human richness. Therefore, each call for unconditional positioning reduces the possibility of constructive criticism, dialogue and, ultimately, agreement. On the other hand, equidistance is what favors harmony and sincere dialogue, that which arises from a more balanced vision of the world in which there are neither good nor bad, but only interests and needs that must be put in common. It is what allows us to unite positions without falling into extreme value judgments. It is what allows us to open ourselves to complexity and accept the other, with their virtues and defects, as the other accepts us, with our virtues and defects. And perhaps, precisely because of all these virtues, equidistance is once again so reviled. Because in troubled times it is not equidistant individuals who are sought, but militants.
3 Ways Plastic Cribbing Can Increase Efficiency and Output

Humans have found creative ways to support and move things much larger than ourselves since the dawn of time. It’s how ancient civilizations built massive, elaborate structures without modern technological advances. But even with modern advances, manufacturers are sometimes best served by older, simpler solutions. Cribbing is a good example of this. But to maximize its benefits, it may be beneficial to add a modern spin by making the boards out of plastic. There are numerous ways plastic cribbing can increase efficiency and output for a variety of industrial practices.

Warning About Overloading

Few things can set back a construction or manufacturing operation like broken equipment. And one potential cause of broken equipment is cribbing breaking due to overloading. When one uses traditional wooden box cribs, workers have to use their best guess when determining whether the crib is overloaded and whether the timbers may break without warning. Plastic cribbing is designed with a safe failure mode. If operators place too heavy a load on plastic cribbing, the sides bulge, warning operators that the equipment is overloaded. This allows those working in the area to adjust to avoid costly accidents. Generally, people consider wood to be the more durable option when compared to plastic. However, this fails to recognize how environmental factors affect wood. Wood is likely to be affected by rot or burrowing insects like termites. This can lead to the cribbing breaking down and needing to be replaced, which takes away from a business’s overall efficiency. Plastic doesn’t attract termites that could potentially compromise a job site, and it is largely impervious to the elements. This ensures a long-lasting system that will carry you through your entire task without fail. As mentioned, wood is prone to damage from the surrounding environment. Because of that, those using it are often compelled to perform a variety of maintenance tasks to avoid rot or insect damage, such as treating the wood with chemical sealant. This can be time-consuming for companies, especially if they need to reapply the sealant later. On the other hand, plastic requires no additional treatment to be weatherproof, saving companies an extra step. Additionally, the waterproof nature of plastic makes it quick and easy to clean, and it does not stain. There are many ways that plastic cribbing can increase efficiency and output in your facility or worksite. Check out Tangent Material’s plastic cribbing today to see how it can transform your workplace.
Rotational exercises for mobilizing the spine, mainly targeting the lumbar region. Important for athletic performance, health and fitness! Rotational strength is mainly created with your obliques, abs and hips, and it's the key to athletic performance. Whether it's movement, sports or martial arts, your trunk generates much of the force behind sport-specific movements like striking, hitting or moving in general. If you play sports or do Movement 20XX, your core may already be nicely developed, but it's still beneficial to make sure you have full strength and mobility in rotation. Everyone wants to move better and hit harder, right? Working on your rotation is especially important if you are a gym-goer or do just calisthenics. When was the last time you did rotational drills? A great portion of people have BARELY done any. The rotational drills in the video will effectively stretch and mobilize the spine so your body can rotate better. You can do the drills as a warm-up, as a workout finisher or as a workout on their own. It's good to mix different drills together (or do them all) because the emphasis of these drills can be very different. For example, you want to do rotational drills where only your upper body is moving and drills where only the lower body is moving (i.e. the scorpion drill). Another thing to take into account is to emphasize different parts of the rotation: are you rotating with your upper body (thoracic), core (lumbar) or hips (legs and lumbar spine)? It's sometimes better to focus on moving only your core, because your hips can overpower the movement and do all of the real work of the rotation. It's common to see people who have tons of strength in their hips, but not much strength in their obliques (rotation of the core). Regardless of your goals and aspirations, these rotational mobility exercises will help you become a stronger and healthier version of yourself. Train hard, stay safe.
Pope Francis Video: religious persecution is ‘unacceptable, inhuman’

POPE FRANCIS HAS DEVOTED THE JANUARY EDITION OF HIS MONTHLY VIDEO—and with it the prayer intention he entrusts to the entire Catholic Church—to a call and prayer for an end to violations of religious freedom. This month, the Pope Video is sponsored by Aid to the Church in Need (ACN). In his video message, the Pope asks: “How is it possible that many religious minorities currently suffer discrimination or persecution? How can we allow that in this society, which is so civilized, there are people who are persecuted simply because they publicly profess their faith?” “Not only is it unacceptable; it’s inhuman, it’s insane,” the Pontiff charges, insisting that religious freedom “is not limited to freedom of worship … Rather, it makes us appreciate others in their differences and recognize them as true brothers and sisters.” “As human beings, we have so many things in common that we can live alongside each other, welcoming our differences with the joy of being brothers and sisters,” says Pope Francis, adding: “Let us choose the path of fraternity, because either we are brothers and sisters, or we all lose.” The Pope called on the entire Church to “pray that those who suffer discrimination and suffer religious persecution, may find in the societies in which they live the rights and dignity that comes from being brothers and sisters.” According to the Religious Freedom in the World report published by ACN in April 2021, religious freedom is violated in a third of the countries around the world, home to close to 5.2 billion people. The same document reports that more than 646 million Christians live in countries where religious freedom is not respected. This topic merits attention, declares Thomas Heine-Geldern, executive president of ACN: “Although it is impossible to know the exact number, our research indicates that two-thirds of the world’s population lives in countries where violations of religious freedom occur in one way or another. Surprising? No, this situation has been growing for centuries from the roots of intolerance, through discrimination, to persecution. We firmly believe that the right to be free to practice or not practice any religion is a fundamental human right that is directly related to the dignity of every individual. “It may sound obvious, but even when human rights are on everyone’s lips, religious freedom often leads a shadowy existence. But this right is the starting point for our entire mission. How could we defend the rights of the Christian community if we did not advocate universal law first? Religion is manipulated again and again to spark war. We at ACN are confronted with it every day. Defending the right to religious freedom is key to debunking these conflicts. The religious communities play a central role when ‘nothing works’ politically or diplomatically in war and crisis regions of the world. The world should be aware that the prospects for peaceful coexistence will be bleak unless freedom of religion or belief is respected as a fundamental human right based on the human dignity of every individual.” The Pope reminds us that religious freedom is tied to the concept of fraternity.
In order to begin walking the paths of fraternity upon which Francis has been insisting for years, it’s imperative that we not only respect others, our neighbors, but that we genuinely value them, as the Pope says, “in their differences and recognize them as true brothers and sisters.” “For the Holy Father, ‘as human beings, we have so many things in common that we can live alongside each other, welcoming our differences with the joy of being brothers and sisters.’ Without granting this premise, it is impossible to undertake the path towards peace and living side by side with each other,” concludes Dr. Heine-Geldern.
Scaling throughput and performance are critical design topics for all distributed databases, and sharding is usually a part of the solution. However, a design that increases throughput does not always help with performance and vice versa. Even when a design supports both, scaling them up and down at the same time is not always easy. This post will describe these two types of scaling for both query and ingest workloads, and discuss sharding techniques that make them elastic. Before we dive into the database world, let us first walk through an example of elastic throughput and performance scaling from daily life.

Scaling effects in a fast food restaurant

Nancy is opening a fast food restaurant and laying out the scenarios to optimize her operational costs on different days of the week. Figure 1 illustrates her business on a quiet day. For the restaurant to be open, there are two lines which must remain open: drive-thru and walk-in. Each requires one employee to cover. On average, each person needs six minutes to process an order, and the two employees should be able to cover the restaurant’s expected throughput of 20 customers per hour. Let’s assume that an order can be processed in parallel by at most two people, one making drinks and the other making food. Nancy’s employees are trained to go and help with the other line if their line is empty. Doubling up on a single line reduces the order processing time to three minutes and helps keep the throughput steady when customers enter the lines at various intervals. Figure 2 shows a busier day with around 50% more customers. Adding an employee should cover the 50% increase in throughput. Nancy asks her team to be flexible:
- If only one customer comes to a line at a time, one person should run between the two lines to help reduce the processing time so they will be available to help new customers immediately.
- If a few customers walk in at the same time, employees should open a new line to help at least two walk-in customers at the same time, because Nancy knows walk-in customers tend to be happier when their orders are taken immediately but are tolerant of the six-minute processing time.
To smoothly handle the busiest days of the year, which draw some 80 customers per hour, Nancy builds a total of four counters: one drive-thru and three walk-ins, as shown in Figure 3. Since adding a third person to help with an order won’t reduce the order time further, she plans to staff up to two employees per counter. A few days a year, when the town holds a big event and closes the street (making the drive-thru inaccessible), Nancy accepts that her max throughput will be 60 customers per hour. Nancy’s order-handling strategy elastically scales customer throughput (i.e., scales as needed) while also flexibly improving order processing time (i.e., performance). Important points to notice:
- The max performance scaling factor (the max number of employees who can help with one order) is two. Nancy cannot change this factor if she wants to stick with the same food offerings.
- The max throughput is 80 customers per hour, due to the max number of counters being four. Nancy could change this factor if she has room to add more counters to her restaurant.

Scaling effects in a sharded database system

Similar to the operation at a fast food restaurant, a database system should be built to support elastic scaling of throughput and performance for both query and ingest workloads.
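Before turning to query and ingest workloads, here is Nancy's arithmetic captured in a short sketch, using only the figures given above (six minutes per order, at most two employees per order, at most four counters). It simply makes the two dials explicit: adding counters scales throughput, adding a second employee per counter scales performance, and both hit hard caps.

```python
ORDER_MINUTES_ONE_EMPLOYEE = 6
MAX_EMPLOYEES_PER_ORDER = 2      # performance cap: a third helper adds nothing
MAX_COUNTERS = 4                 # throughput cap set by the building

def order_minutes(employees_on_order):
    """Order time shrinks only up to the performance scaling factor of two."""
    return ORDER_MINUTES_ONE_EMPLOYEE / min(employees_on_order, MAX_EMPLOYEES_PER_ORDER)

def customers_per_hour(counters, employees_per_counter):
    """Throughput grows with counters, up to the four the restaurant can hold."""
    return min(counters, MAX_COUNTERS) * 60 / order_minutes(employees_per_counter)

print(order_minutes(1), order_minutes(2), order_minutes(3))   # 6.0 3.0 3.0
print(customers_per_hour(2, 1))                               # 20.0 on a quiet day
print(customers_per_hour(4, 2))                               # 80.0 at the yearly peak
```

The same two dials, with the same kind of caps, reappear in the database setups described next.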
- Query throughput scaling: the ability to scale up and down the number of queries executed in a defined amount of time, such as a second or a minute.
- Query performance scaling: the ability to make a query run faster or slower.
- Elastic scaling: the ability to scale throughput or performance up and down easily based on traffic or other needs.
Let’s assume our sales data is stored in an accessible storage location such as a local disk, a remote disk or a cloud. Three teams in the company – Reporting, Marketing, and Sales – want to query this data frequently. Our first setup, illustrated in Figure 4, is to have one query node receive all queries from all three teams, read the data, and return the query results. At first this setup works well, but when more and more queries are added, the wait time to get results back becomes quite large. Worse, many times the queries get lost due to timeouts. To deal with the increasing query throughput requests, a new setup shown in Figure 5 provides four query nodes. Each of these nodes works independently for a different business purpose: one for the Reporting team, one for the Marketing team, one for the Sales team focusing on small customers, and one for the Sales team focusing on large customers. The new setup catches up well with the high volume of throughput and no queries get lost. However, for some time-sensitive queries that the teams need to react to immediately, waiting several minutes to get the result back is not good enough. To solve this problem, the data is split equally into four shards, where each shard contains the data for 12 or 13 states, as shown in Figure 6. Because the Reporting team runs the most latency-sensitive queries, a query cluster of four nodes is built for them to perform queries four times faster. The Marketing team is still happy with its single-node setup, so data from all shards is directed to that one node. The Sales team does not deal with time-sensitive queries, but as this team grows larger, the number of query requests keeps increasing. Therefore, the Sales team should take advantage of performance scaling to improve throughput and avoid reaching max throughput in the near future. This is done by replacing its two independent query nodes with two independent query clusters, one with four nodes and the other with two nodes, based on their respective growth. During times of the year when the Reporting team does not need to handle time-sensitive queries, two query nodes of its cluster are temporarily removed to save resources, as shown in Figure 7. Similarly, when the Sales team does not need to handle high throughput workloads, it temporarily removes one of its clusters and directs all queries to the remaining one. The teams are happy with their elastic scaling setup. The current setup allows all teams to scale throughput up and down easily, by adding or removing query clusters. However, the Reporting team notices that its query performance does not improve beyond the limit factor of four query nodes; scaling query nodes beyond that limit doesn’t help. Thus we can say that the Reporting team’s query throughput scaling is fully elastic, but its query performance scaling is only elastic up to the scale factor of four. The only way the Reporting team can scale query performance further is to split the data into more and smaller shards, which is not trivial. We’ll discuss this next.
- Ingest throughput scaling: the ability to scale up and down the amount of ingested data in a defined amount of time, such as a second or a minute.
- Ingest performance scaling: the ability to increase or decrease the speed of ingesting a set of data into the system.
In order to have four shards of sales data as described above, the ingest data must be sharded at load time. Figure 8 illustrates an ingest node that takes all ingest requests, shards them accordingly, handles pre-ingest work, and then saves the data to the right shard. However, when the ingest data increases, one ingest node can no longer catch up with the requests and ingest data gets lost. Thus a new setup shown in Figure 9 is built to add more ingest nodes, each handling data for a different set of write requests to support higher ingest throughput. Even though the new setup handles a higher volume of ingest throughput and no data gets lost, the increasing demand for lower ingest latency makes the teams think they need to change the setup further. The ingest nodes that need lower ingest latency are converted into ingest clusters, shown in Figure 10. Here each cluster includes a shard node that is responsible for sharding the incoming data, plus additional ingest nodes. Each ingest node is responsible for processing pre-ingest work for its assigned shards and sending the data to the right shard storage. The performance of Ingest Cluster 2 is twice that of Ingest Node 1, as the latency is now around half of the previous setup. Ingest Cluster 3 is around four times as fast as Ingest Node 1. During times of the year when latency is not critical, a couple of nodes are temporarily removed from Ingest Cluster 3 to save resources. When ingest throughput is minimal, Ingest Cluster 2 and Ingest Cluster 3 are even shut down and all write requests are directed to Ingest Node 1 for ingesting. As with their query workloads, the Reporting, Marketing, and Sales teams are very happy with the elastic scaling setup for their ingest workloads. However, they notice that even though ingest throughput scales up and down easily by adding and removing ingest clusters, once Ingest Cluster 3 has reached its scale factor of four, adding more ingest nodes to the cluster doesn’t improve performance. Thus we can say that its ingest throughput scaling is fully elastic, but its ingest performance scaling is only elastic up to the scale factor of four.

Preparing for future elasticity

As demonstrated in the examples, the query and ingest throughput scaling of the setups in Figure 6 and Figure 10 is fully elastic, but their performance scaling is only elastic up to the scale factor of four. To support a higher performance scaling factor, the data should be split into smaller shards, e.g., one shard per state. However, when we then operate at a smaller scale factor, many shards must be mapped to one query node in the query cluster. Similarly, one ingest node must handle the data of many shards. A limitation of performance scaling is that increasing the scale factor (i.e., splitting data into smaller shards) does not mean the system will scale as expected, due to the overhead or limitations of each use case – as we saw in Nancy’s fast food restaurant, where the max performance scaling factor was two employees per order. The elastic throughput and performance scaling described in this post are just examples to help us understand their role in a database system. The real designs that support them are a lot more complicated and need to consider more factors.
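To ground the discussion, here is a generic sketch (not any particular database's API) of the sharding decision that sits underneath both Figure 6 and Figure 10: every row is assigned to one of the four shards by state, and a routing table says which ingest node owns which shard; a query cluster uses the same shard map in reverse to decide which nodes scan which shards. The state-to-shard hash, the node names and the routing table are all invented for illustration.

```python
import zlib
from collections import defaultdict

NUM_SHARDS = 4
SHARD_TO_INGEST_NODE = {0: "ingest-1", 1: "ingest-1", 2: "ingest-2", 3: "ingest-2"}

def shard_for_state(state):
    """Stable assignment of each state's data to one of the four shards."""
    return zlib.crc32(state.encode()) % NUM_SHARDS

def route_write_batch(rows):
    """Group a write request by the ingest node that owns each row's shard."""
    batches = defaultdict(list)
    for row in rows:
        shard = shard_for_state(row["state"])
        batches[SHARD_TO_INGEST_NODE[shard]].append((shard, row))
    return dict(batches)

rows = [{"state": "CA", "amount": 30}, {"state": "NY", "amount": 12}]
print(route_write_batch(rows))   # each ingest node receives only the shards it owns
```

Splitting into more, smaller shards (say one per state) raises the possible performance scale factor, but, as noted above, it also means each node may own many shards, and the real engineering lies in moving those assignments around without disrupting traffic.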
10 Things You Didn't Know About Vaginas
by Joe Martino
Updated: 19 September 2015

It's something every woman has, and that's been crucial in getting people to read this – a vagina. It's sometimes not all that easy to talk about, because our culture laughs and giggles about private parts (unfortunately), but today we thought we'd share some cool facts about the vagina while raising awareness about cervical cancer, since it's important to know the early warning signs so we can stay healthy. The infographic below was created by the people over at HB Health of Knightsbridge for your viewing and learning pleasure. To learn more about cervical cancer you can read below or click here. But for now, it's time to find out the 10 things you didn't know about vaginas!

1. Abnormal Vaginal Bleeding
Abnormal vaginal bleeding between periods, after intercourse, or after menopause. Although this could be due to other medical conditions, it is a telltale sign of possible cervical cancer. If you are experiencing abnormal bleeding between periods or after sexual intercourse, you definitely want to contact your doctor.

2. Unusual Vaginal Discharge
If you are experiencing abnormal vaginal discharge, it could be a result of bacterial vaginosis, menopause symptoms, or a yeast infection. It could also be harmless, but significant changes in discharge are still worth getting checked out. If the smell is very foul, becomes a more common occurrence, or the discharge is brownish, heavy, pale, or blood-tinged, it could be a sign of cervical cancer. It could also be a sign of various other conditions. Again, if this is happening, contact your doctor right away.

3. Discomfort While Urinating
Pain during urination can be a sign of cervical cancer, but this symptom usually occurs only once the cancer has already spread to the bladder. In most cases, however, this type of pain is a sign of something far less serious, like a urinary tract infection.

4. Pain During Sex
Discomfort during sexual intercourse could also be another sign. Again, just to reiterate, many of these symptoms signify a far less worrisome issue, but you never know, especially if you are experiencing multiple symptoms at once. Pain during sex can be a late-onset symptom of cervical cancer, and could indicate that the cancer has spread throughout the reproductive organs and tissues.

5. Heavier & Longer Menstrual Periods
Abnormal and heavier menstrual periods are another sign of cervical cancer. Irritation of the cervix, possibly due to cervical cancer, can also occur.

6. Loss of Bladder Control
Loss of bladder control is a big issue when it comes to cervical cancer, as the bladder is one area of the body to which cervical cancer commonly spreads. People with cervical cancer often experience loss of bladder control as well as a hint of blood during urination.

7. Body Pain
A common symptom of cervical cancer is body pain, more specifically pain in the leg, back, and/or pelvis. Women with cervical cancer often experience swelling of the legs, because the cancer spreads and obstructs blood flow. It can get to the point where basic, simple movements are difficult to do. It's common for women who are experiencing these symptoms as a result of cervical cancer to have prolonged pain which increases as time goes on.

8. Constant Fatigue
Constant fatigue could also be a sign, especially if it occurs in conjunction with some of the other symptoms mentioned in this article. When there is disease in your body, it will work hard to do its best to try and fight it off.
Your body then becomes tired as a result of these various biological processes.

9. Unexplained Weight Loss
When you are fighting disease, the body produces small proteins called cytokines, which break down fat at a much higher rate than normal. This leads to weight loss, irrespective of your diet. As with many other cancers, the same applies here.
How Wireless Pressure Transducers Are Used in Gas Piping Systems

The gas piping system is an important part of urban infrastructure, and given the importance of a normal, safe gas supply, wireless intelligent products such as wireless pressure transducers are used in the safety monitoring systems of the gas pipeline network.

Why Use Wireless Pressure Transducers in a Gas Piping System?

The first reason for using wireless pressure transducers in urban gas piping systems is that, compared with other pipelines, gas pipelines have particularly strict requirements: if a pipeline develops a problem, it may cause fires, explosions, poisoning and other accidents. The wireless pressure meter is a smart wireless product that can be used to monitor pipeline pressure, so when there is a problem with the gas pressure at a monitoring point, we can find it quickly. Since the medium to be measured is natural gas, which is flammable and explosive, an explosion-proof wireless pressure transducer meets the strict requirements that would challenge ordinary pressure gauges.

What Do We Gain from Wireless Pressure Transducers?

A wireless pressure transducer can display pressure, temperature, signal and power data in real time and on site. It offers real-time and historical curves, historical list queries, data download and printing, and supports device list management, map management, device grouping, device name customisation and other functions. Moreover, all of these operations can be performed remotely: we can view all the data from mobile or PC terminals, which is a great convenience for monitoring and maintenance personnel.

What Else Can Wireless Pressure Transducers Do?

In addition to gas pipelines, wireless pressure transducers can also be used in water, petroleum and chemical pipelines. They have a wide range of applications and are accurate, safe and reliable.
While earnings inequality remained virtually unchanged in urban India between 2004-05 and 2011-12, it declined sharply in rural India over this period. This column finds that although the change in the distribution of education among paid workers had an inequality-increasing effect, there was a net decline in rural inequality because returns to increased levels of education improved more for low-earning workers than high-earning ones.

In their discussion of India's economic growth, Kotwal, Ramaswami and Wadhwa (2011) point to the existence of two Indias: "One of educated managers and engineers who have been able to take advantage of the opportunities made available through globalization and the other—a huge mass of undereducated people who are making a living in low productivity jobs in the informal sector—the largest of which is still agriculture." This column is about the second India that mainly resides in its rural parts.

Agriculture is the mainstay of the rural economy in India and it continues to employ the largest share of the Indian workforce. However, its contribution to gross value added (GVA) is much smaller. In 2011, the employment shares of agriculture, industry, and services were 49%, 24% and 27%, respectively, whereas their shares in GVA were 19%, 33% and 48%, respectively. In addition, between 2004-05 and 2011-12, real gross domestic product (GDP) in these sectors grew at 4.2%, 8.5% and 9.6% per annum, respectively, making agriculture the slowest growing sector of the economy. Given these figures, the concern about whether high overall GDP growth has benefitted those at the bottom, and to what extent they have benefitted compared to those at the top, is extremely pertinent for rural India.

Earnings inequality in India

We use two rounds of the nationally representative ‘Employment Unemployment Surveys’ (EUS) conducted by the National Sample Survey Organisation (NSSO) for the years 2004-05 and 2011-12 (Khanna et al. 2016). Our target population is wage earners between the ages of 15 and 64 (working age) living in rural areas of India. In both years, wage earners constituted a quarter of the rural working-age population and represented about 104 million paid workers in 2004-05, and 118 million in 2011-12. Over the seven-year period, we find that the earnings distribution shifted to the right and became less dispersed. The average real weekly earnings increased from Rs. 391 to about Rs. 604, while the median increased from Rs. 263 to Rs. 457. For 2004-05, the all-India official rural poverty line was Rs. 447 per capita per month. Thus, the average (median) real monthly earnings was 3.5 (2.4) times the poverty line, and in 2011-12 it was 5.4 (4.1) times this value.

Figure 1. Real weekly earnings, by percentile, 2004-05 and 2011-12

Figure 1 plots the real weekly earnings at each percentile [1] for 2004-05 and 2011-12. At each percentile, earnings were higher in 2011-12 than in 2004-05. The gap between the two curves reveals that the increase in earnings was, in absolute terms, greater for higher percentiles. For instance, real weekly earnings increased by Rs. 99 at the first decile, Rs. 194 at the median, and Rs. 307 at the ninth decile. However, as seen in Figure 2, the percentage increase in earnings was greater at the lower end of the distribution. For instance, earnings increased by 91% at the first decile, 74% at the median, and by 44% at the ninth decile. Thus, earnings inequality - defined in relative rather than absolute terms - declined over the seven-year period.

Figure 2.
Change in log of real weekly earnings, by percentile, 2004-05 to 2011-12

The decrease in inequality is also reflected in the Gini coefficients [2]. The Gini coefficient of real weekly earnings fell from 0.462 to 0.396. This is in sharp contrast to the picture in urban India, where earnings inequality remained virtually unchanged over the period: the Gini coefficient of real weekly earnings in urban India was 0.506 in 2004-05 and 0.499 in 2011-12.

Decomposition of the change in earnings

Figure 3 shows the results of the decomposition of the change in the (log) real earnings distribution at different vigintiles (vigintiles refer to the 19 points that divide the wage earners into 20 groups of equal size in ascending order of earnings). A decomposition exercise essentially divides the observed change in earnings over the seven-year period into two parts: a composition effect and a structure effect. This is done by constructing a hypothetical earnings distribution that combines worker characteristics (such as the share of male workers, and the shares of workers with different levels of education) [3] as observed in 2004-05, with the rates of return (essentially how the labour market rewards males versus females, and how it rewards illiterates, high schoolers and college graduates) as observed in 2011-12. Consequently, the difference between the 2004-05 distribution and the hypothetical distribution gives us the structure effect, that is, the part arising due to changes in the rates of return keeping the distribution of characteristics fixed at 2004-05 levels; while the difference between the hypothetical distribution and the 2011-12 distribution gives the composition effect, that is, the part due to changes in the distribution of worker characteristics keeping the rates of return fixed at 2011-12 rates.

In Figure 3, the dashed line representing the structure effect closely follows the bold total change line, is in the positive domain and is downward sloping. From this, one can conclude that most of the decline in inequality occurred because the returns improved a lot more for low earners (typically low skilled, women, illiterates) than for high earners (typically high skilled, men, college graduates). In fact, it is clear that while changing characteristics did lead to an improvement in real earnings throughout the distribution, it had an inequality-increasing effect (the dotted line is in the positive domain but is upward sloping at higher percentiles). Thus, if the ‘wage structure’ had been held constant over the period, earnings inequality would have risen due to the change in worker characteristics.

Detailed decomposition of the composition effect reveals that the inequality-increasing effect was mainly driven by changes in the distribution of education among paid workers: over the seven-year period the share of illiterates decreased from 45% to 35.6%, while the shares of all other levels of education, ranging from primary to college and beyond, increased. On the other hand, the change in the industrial composition, mainly arising from a shift from agriculture to construction, led to decreased earnings inequality. Detailed decomposition of the structure effect reveals that the inequality-decreasing effect was driven by lower returns to higher levels of education for workers at the top end of the earnings distribution in 2011-12 compared to 2004-05.
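In symbols, the decomposition just described can be written as follows. This is only a schematic sketch of a standard aggregate decomposition, not the authors' exact estimator; here ν stands for the distributional statistic of interest (for example, a vigintile of log earnings), X for the distribution of worker characteristics, β for the labour-market returns, and the subscripts 04 and 11 for 2004-05 and 2011-12.

```latex
\Delta\nu \;=\; \nu(X_{11},\beta_{11}) - \nu(X_{04},\beta_{04})
\;=\; \underbrace{\big[\nu(X_{11},\beta_{11}) - \nu(X_{04},\beta_{11})\big]}_{\text{composition effect}}
\;+\; \underbrace{\big[\nu(X_{04},\beta_{11}) - \nu(X_{04},\beta_{04})\big]}_{\text{structure effect}}
```

The middle term, ν(X04, β11), is the hypothetical distribution described above: 2004-05 characteristics evaluated at 2011-12 returns.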
Conclusions and policy implications

For wage earners, who constituted about a quarter of the rural working-age population, we find that real earnings increased at all percentiles. Using consumption expenditure data that span the entire population, other studies (Kotwal et al. 2011) have also documented an improvement in all parts of the distribution. Taken together, there is clear evidence that economic growth in India in the post-reform period (after the early 1990s) has been accompanied by a reduction in poverty.

Our analysis also reveals that while the rural Gini fell over this period, it remained virtually unchanged in urban India. This suggests that the dynamics of earnings are different in the two sectors. This could be because the underlying structural characteristics differ across the two sectors; for example, while agriculture is the largest employer in rural India, for urban India it is services. It could also be the result of different redistributive policies followed in the two sectors. These aspects need to be recognised when designing future policies to tackle inequality in the two regions.

One cannot be certain that this trend of declining earnings inequality will continue into the future. Regardless of the underlying causes of the recent decline in earnings inequality in rural India, volatility in global crop prices, and the drought conditions recently experienced by large parts of the country after two consecutive weak monsoons, are important reminders that policies designed to foster employment opportunities and wage growth for unskilled workers outside of agriculture are crucial for improving the economic well-being of the rural workforce in India.
Do colleges use block scheduling?
A few American colleges practice block scheduling, such as Colorado College and Cornell College in Iowa, so the approach is hardly untried and untested. In a block-scheduling situation, a student takes, and a faculty member teaches, one course at a time during a three- to four-week period.

What is a block in university?
Block subjects are a model of teaching students one subject at a time over two to four weeks, rather than several subjects at a time over ten to 13 weeks in a semester.

Is block scheduling better than regular?
While it's hard to say one way is better than another, some studies have found that block scheduling is ineffective. Some research has shown that students in the block schedule format have scored lower in science, biology, physics, and chemistry.

Does block scheduling improve student achievement?
Some researchers have concluded that block scheduling does not have a positive effect on academic achievement.

What does block scheduling mean?
A block schedule is a system for scheduling the middle- or high-school day, typically by replacing a more traditional schedule of six or seven 40-50 minute daily periods with longer class periods that meet fewer times each day and week.

How do you do block scheduling?
7 tips to start time blocking your schedule today
- Identify what you need to work on for the day.
- Figure out when you're most productive.
- Group meetings if possible.
- Schedule your time blocks.
- Block off personal time.
- Allow for unexpected interruptions or work.
- Plan for lost time.
- Adjust as needed.

What are the benefits of block scheduling?
With block scheduling, students and teachers are able to focus on fewer subjects and to explore them in greater depth. Both teachers and students assert that this exploration allows them to become engrossed in the subject matter rather than moving rapidly through material.

Why is block schedule better than traditional?
Block scheduling, which reorganizes the school day into longer periods for in-depth learning, allows students to see groups of teachers on alternating days, according to research by the National Education Agency. By contrast, students on a traditional schedule attend all classes every day.

What colleges have block schedule?
A block lasts for three and a half weeks, beginning on a Monday and ending on the following fourth Wednesday.

How do schools use block scheduling?
Look at the big picture.

Should schools have block schedule?
INCREASING INSTRUCTION TIME. The current high school A/B block schedule results in infrequent student/teacher contact, which negatively impacts students' academic, social and emotional development.
What is the patella? The patella, also known as the kneecap, is a small bone located in the front of the knee. The patella connects the quadriceps muscles to the patellar tendon and, ultimately, the shinbone (tibia). The quadriceps muscles are some of the strongest muscles in the body, allowing you to walk, run, and jump. As these muscles contract and relax, the patella glides back and forth through a groove in the knee, called the trochlea. What is patellar instability? Occasionally, the patella can slide or pop out of the groove (trochlea) that it normally glides in. This can result in a partial or full dislocation, which is usually very painful. When a patient has one or more of these dislocations, it is called patellar instability or an unstable kneecap. What are the causes of patellar instability? Patellar instability can occur due to a direct blow to the knee (e.g. a football player’s helmet hitting the kneecap) or due to abnormal positioning of the knee that may occur in sports, dance, or a fall. Usually, patients with patellar instability have specific risk factors that predispose them to a dislocation, including abnormal alignment of the legs or a shallow groove. Symptoms of patellar instability: Patients who experience a patellar dislocation will usually have severe pain in the front of the knee. Their knee will usually swell and it will be difficult for them to bend the knee without discomfort. The kneecap may or may not go back to its normal position within the groove after dislocating – sometimes the patient or a healthcare provider must put it back in place. What structures are injured in patellar instability? Normally, the kneecap is held in place within the groove by a band of tissue called the medial patellofemoral ligament (MPFL). This tissue acts as a rope or checkrein to prevent the kneecap from moving out of the groove. When the kneecap (patella) dislocates, the MPFL is damaged in almost all cases. Another part of the knee that can be injured when the patella dislocates is the cartilage. Cartilage is a smooth material that helps all of our joints to glide and bend easily, including the kneecap (patella) and its groove (trochlea). When the patella dislocates, these cartilage surfaces can be damaged, causing a piece of cartilage or a piece of bone and cartilage, to break off. When a piece of cartilage and/or bone breaks off, it is called a “loose body” and can float around the knee. A “loose body” can cause the knee to lock or catch and often needs to be removed. If I dislocate my patella, will it happen again? In general, a patient who has had a prior patellar dislocation is at a higher risk of having another; and a patient who has had multiple patellar dislocations is at an even higher risk. Patients who are younger and have shallow grooves are at the highest risk of repeat dislocations. The history and physical examination often provide evidence that a patellar dislocation has occurred. Your doctor will usually get x-rays of the knee to view the position of the kneecap, as well as the depth of the groove. When a patellar dislocation occurs, an MRI is also usually ordered. The MRI helps to look to for evidence of damaged structures (the MPFL or the cartilage) and to see if there is a “loose body” within the knee. Occasionally, a CT scan may be needed to evaluate the bony structures of the knee. When a patellar dislocation occurs, the first step is to make sure the patella is back in place within the groove. 
This often happens on its own, but occasionally requires the patient, a friend, or a healthcare provider to put it back in place (called a reduction). When a patient presents to clinic after a recent dislocation and/or reduction, multiple treatment options exist. The treatment path that your surgeon recommends will depend on the results of your x-rays and MRI, as well as the number of dislocations you have had in the past. Conservative Treatment Options: Conservative treatment is recommended for most patients after their first dislocation who do not have a “loose body” within the knee. Conservative treatment consists of: - Rest: avoiding athletic or contact activities for a period of time. This will allow the swelling and inflammation in the knee to subside. - Ice: this will help to decrease swelling and inflammation, leading to less pain. - Elevation: this will help to decrease swelling and pain in the knee. - Brace or sleeve: a brace or sleeve may help to alleviate swelling and keep the kneecap in place. - Physical therapy: to help strengthen the quadriceps and other muscles in the legs and core to prevent future dislocations. Surgical treatment is indicated in patients with a history of multiple dislocations, in patients with a “loose body”, or rarely in patients after their first dislocation who are at high risk for another. The goal of surgery is to prevent future dislocations. There are several different surgical options for patellar instability that may be recommended depending on a patient’s specific situation. These include: - Arthroscopy: a minimally invasive technique in which small incisions are made to insert a camera and small instruments into the knee joint. These instruments can be used to remove loose bodies and view the cartilage surfaces of the knee. - Open surgery: open surgery through incisions is often required to treat patellar instability. Your doctor will talk to you about the type of surgery recommended, but it may include reconstruction of the damaged ligament (MPFL) or shifting a part of the shinbone (tibial tubercle) that the patella and quadriceps tendon are connected to. Rarely, the cartilage is badly damaged and a procedure is required to repair it. These surgeries are all performed on an outpatient basis, meaning you will go home the same day. A numbing medication is typically injected during surgery and a prescription for pain medication will be given to help with pain after surgery. Typically, you will use a brace after surgery that keeps your knee straight and the amount of weight you put on your leg (weightbearing) may be restricted. The type of surgery you have will determine how long you need a brace or to keep the weight off your leg. A physical therapy program, specific to your surgery, will also be provided to guide you through restoring motion, followed by strength and function to your knee. The total recovery time depends on your specific surgery, but usually a minimum of 4-6 months after surgery are required. Risk and Complications: Complications from patellar stabilization surgery are relatively rare but depend on the type of surgery your surgeon recommends. These risks may include bleeding, infection, blood clots, prominent screws, damage to nerves or blood vessels, persistent pain, or the need for further surgery. At Reno Orthopedic Center, your sports medicine surgeons are experienced in the treatment of patellar instability. As a ROC patient, we will provide you with the best and most up to date techniques available.
Equine Natural Therapy: Complementary treatments for lameness in horses.

Equine Natural Therapy concentrates on the tendon injuries and joint diseases which are the major causes of lameness in horses, and on the natural therapies, such as acupuncture, chiropractic care, equine spa hydrotherapy and massage, which are most relevant to their treatment. The focus is entirely on natural therapies which have been adopted into the mainstream by many veterinary practitioners as complementary to, and in some cases the preferred alternative to, conventional treatments. The more esoteric alternatives, such as magnetic therapy, have been omitted. As well as remedies for lameness, there is a significant collection of articles from around the world on the structure of equine joints, together with the main causes and diagnosis of the common joint and tendon problems to which horses are particularly prone. It should be noted that all treatment of horses (or any animal, for that matter) in the UK must be carried out under the supervision of a Veterinary Surgeon. The prescription of drugs or any surgery must not be carried out by unqualified individuals.

Acupuncture has been used effectively for the treatment of ailments in humans for thousands of years, but it is only now becoming accepted, particularly in the US, as a suitable treatment for horses. Chiropractic care has its supporters and detractors, but some horse owners and vets are confident that positive results have been achieved. The use of sea water as a preventative as well as a curative medium for horses has been common practice for centuries. Where access to the sea is not an option, the hosing down of horses' legs with cold water is used worldwide in the treatment and prevention of lameness. In the last 10 years, however, there has been significant progress and research into the use of cold water hydrotherapy, or cryotherapy, in the treatment of equine lameness. Australia and the US have been at the forefront of this development. Massage is another field which is used extensively in the treatment and training of human athletes. The benefits of equine massage have gradually gained credence and equine massage specialists are now available in many parts of the country.

How Hydrotherapy Works
Meet the Glaucus atlanticus, a tiny sea slug that’s only 3 cm long, floating on the surface of an ocean near you. Like many of these uncommon creatures, this one has a lot of names which include a sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug. Despite its colorful appearance, this isn’t a creature you’d like to come across as it’s loaded with poison due to its unique diet. It feeds on the venomous Portuguese man o’ war, a creature often mistaken for a jellyfish which also goes by the unsettling name of the “floating terror.” This small invertebrate eats hydrozoans from the man o’ war, which are fatal to humans, but easily consumed by the Glaucus atlanticus. It has hard disks inside its body that act as protective barriers, secreting a special mucus. After enjoying a meal, the Glaucus atlanticus stores the poison inside its body for the future in order to defend itself. Some scientists think that if this little sea slug has a voracious appetite, it can end up becoming far more dangerous than even the Portuguese man o’ war. The Glaucus Atlanticus is neither male nor female, but a hermaphrodite with both reproductive organs. After mating, each slug produces eggs, which they lay on driftwood, or on the skeletal remains of their enemies. When food is in short supply, things get a little crazy. When one slug meets another in the neighborhood, rather than wave hello, they’ll simply begin to eat the other head first. Yep, turns out this little pint-sized blue dragon is also a poison-sucking cannibal. Too bad, really, as they are such a lovely shade of blue.
What is ashwagandha?

Ashwagandha, also known as Withania somnifera or winter cherry, is an evergreen shrub which grows in India, the Middle East and parts of North Africa. Ashwagandha is commonly used in Ayurvedic medicine, one of the world's oldest medical traditions, which aims to treat common ailments through plant medicines, diet, exercise and lifestyle changes. Ayurvedic medicine is still commonly practiced in India, though it is considered a traditional medicine practice rather than a branch of modern scientific medicine. Although Ayurvedic medicine is considered pseudoscientific in the western world, some Ayurvedic treatments have been shown to have health benefits when used as supplements. Examples include brahmi, an effective anti-inflammatory, and turmeric, which improves blood flow.

Does ashwagandha work for erectile dysfunction?

Ashwagandha is claimed to have many benefits in Ayurvedic medicine, some of which include improved penile health, improved symptoms of erectile dysfunction, and sometimes even penis growth. Many men use ashwagandha as a low-cost alternative to classic erectile dysfunction medicines such as Viagra, usually because medical reasons exclude them from taking Viagra. Below are peer-reviewed studies investigating whether ashwagandha is an effective erectile dysfunction treatment. There are currently no peer-reviewed studies of the effect of ashwagandha on vasculogenic erectile dysfunction, which accounts for the vast majority of cases, as the herb is usually claimed to provide decreases in cortisol and increases in testosterone, which would only improve dysfunction caused by anxiety or testosterone deficiency.

A 2002 study investigated the impact of daily supplementation of ashwagandha in healthy male rats over a period of seven days. The study found that the rats actually displayed increased instances of erectile dysfunction as well as lower sex drive and less sex overall. The study concluded that ashwagandha may be detrimental to male sexual competence.

Psychogenic erectile dysfunction

A 2011 study gave 86 men with psychogenic ED (ED caused by anxiety around sex) either ashwagandha or a placebo over a period of 60 days. There was no significant improvement in erectile dysfunction in men who took ashwagandha over placebo, suggesting it is an ineffective treatment for psychogenic erectile dysfunction. A 2014 study repeated this experiment with similar results.

A 2019 study found a 14% increase in testosterone levels over placebo in a sample of 43 overweight men aged between 40 and 70. Another study, from 2010, found a significant increase in testosterone in both a group of 75 fertile men and a group of 75 infertile men, as well as an increase in luteinizing hormone, which stimulates the production of testosterone. Many other studies, however, have found no significant increase in testosterone over placebo, and some have even found significant decreases. It seems then that although there is limited research on the topic, ashwagandha does not appear to have a therapeutic effect on the symptoms of psychogenic erectile dysfunction.

Does ashwagandha have any health benefits?

Although not shown to be an effective treatment for erectile dysfunction, there are other claimed health benefits of ashwagandha which are better supported. These include:
- Reduced stress and anxiety
- Improved athletic performance
- Reduced blood sugar levels
- Reduced inflammation
- Improved cognitive function
- Improved sleep

Is ashwagandha safe to take?
Ashwagandha has been used by humans for centuries, so its side effects are well understood. Side effects typically include drowsiness, gastrointestinal upset and diarrhea, though most people report no side effects. Pregnant women, or people using benzodiazepines, anticonvulsants, or barbiturates should not take ashwagandha. Ashwagandha is not considered a medicine in the UK, and thus is not subject to the same checks and regulations as most medicines are, meaning that you cannot be as sure that the ingredients are what they are claimed to be, or have high purity. A team of researchers in the US tested 114 people using ayurvedic medicines and found that 40% of them had some level of lead poisoning. If you are buying ashwagandha, make sure that it is from a reputable seller, such as a high street shop.
Blogs can be a highly effective means of promoting your business and website. Not only can a blog drive traffic to your site, it can also help you build credibility with your audience. By providing content your target audience cares about, you establish yourself as a knowledgeable professional and become an authority in your field. Consumers tend to buy products and services from trusted sources, and blogs give you the opportunity to demonstrate your expertise.

Blogs are also an effective way to connect with new people, build new relationships, and better understand others. They let you engage with people from different backgrounds, careers, and hobbies: you can read about different subjects, interact with readers, and discover new products and services. Finally, blogs give you the chance to share your opinions and express your ideas in a meaningful way. If you are thinking about starting a blog, here are some reasons to consider it.

A blog is a conversation. It aims to reflect and build a community. A blogger may be anonymous, or may choose to reveal their identity. Many people use blogs for personal purposes: a blogger might post articles about their life, write about politics, or share favourite photos and videos. It is easy to see how blogging can be useful for a business as well.

A blog can be a personal or professional journal, and it can be used as a form of social networking. It is a conversational activity that seeks to reflect the views of a community, and it can serve as a tool for sharing information. A blog can be personal or business-oriented, and it can even operate as a social networking service in its own right. So what exactly is a blog?

One common use of a blog is for business purposes. Businesses can market their products and services on a blog, and non-profit organisations can use one to engage with their supporters. There are many other benefits to blogging, including building relationships and understanding others. As long as you are passionate about the topic, a blog will attract a wide range of readers, and if you are writing for personal purposes, it is a great way to share your experiences and thoughts.

Unlike many other forms of website content, blogs can be updated frequently and are often used by business owners for search engine optimisation. Because search engine algorithms are constantly changing and rank newer content above old, outdated material, a regularly updated site is more likely to receive traffic, and regular posting gives visitors a reason to return. These benefits make blogging a useful tool for lifestyle entrepreneurs.

Despite its many benefits, it is worth noting that blogging has been around for over two decades. Essentially, a blog is an online journal that lets you share your ideas with others. Some people use it to keep their inner thoughts to themselves, while others simply want to share them with friends.
Whatever the purpose of your blog, it can be used for personal or business ends, and some platforms will let you write entirely on your own.

Blogging is an online conversational platform that lets you publish content regularly. It is a great way to market a business, and it can also help you build new relationships: you can connect with people around the world through blogs and learn what others are doing. An online presence helps your company stand out from the competition, so for a business, blogging is an important technique for attracting more customers. Once you have your blog, you can share it with others.

Before the rise of blogging, digital communities had already existed for a long time in the form of early web-based bulletin boards and Usenet. These were essentially open-access journals written by individuals. The first true blogs began appearing in 1994, and several early pioneers built them into their daily lives. In the years that followed, the concept of blogging became popular with the wider public. Blogs are a natural extension of conversation and an effective search engine optimisation tool.

The benefits of blogging are numerous. It can help a business rank higher in search engines, establish expert status in a particular field, and increase traffic to its website. It can also be used to spread knowledge. There are many other reasons for a company to create a blog: a blog can help a company share information, build a sense of community, and improve a brand's reputation with consumers.

Blogging can be an effective marketing tool. It can help companies increase their visibility, and businesses that provide useful information to the public can increase their revenue and leave customers feeling better served. If your business can offer valuable content, you can attract customers, earn more, and stand out in search engine rankings. The more content you have, the more opportunities you have to make sales.

Beyond marketing, blogging is also a valuable way to engage with other people. You can meet new people, share your views, and learn about their lives, interests, and hobbies, and you can pass on your own experiences and tips. There are no limits when it comes to blogging. You can also use your blog to promote your business: a well-designed blog can increase your revenue, though you need to be aware of the risks that come with publishing posts.

In addition to being an effective marketing tool, blogging can benefit your organisation in other ways. While most blogs are informative, they also provide valuable information to the reader. A blog can be used to market products and communicate with customers, and non-profit organisations can use one to keep internal communication going and track progress.
Include pictures, videos, and infographics in your content to make it more appealing to visitors. And if you cannot attract enough readers, you can always try outsourcing the writing.
Life In a Refugee Camp

Editor's note: We were asked not to identify most people we talked to at the camp and we have respected that.

The Kofinou refugee camp seemed to exist apart from the rest of the world. Secluded from most civilization, the camp contains refugees from Africa and the Middle East. The road there was long and exhausting—what was a two and a half hour ride felt like five hours—and I couldn't wait to finally get there. Even as I walked up to the camp's white gate, the doors remained closed. A security guard met me on the opposite side and asked what I was here for. I told him I was a student journalist, here to visit the camp. The gates began to slide open. I was directed to the main office, where it was mandatory for all visitors to check in. After being shuffled among four different workers, I was finally told by a lady to wait for her outside. I signed in on the visitors list and was given a blue lanyard to wear around my neck to signify my status as a visitor.

After discussing my purpose with the authorities, we argued over my camera equipment. I had previously been told by Petros Heracleous, a member of Kisa, an organization that deals with migration and asylum seekers, that it would be no problem to take my equipment in with me. Petros said the camp was open to anyone. The authorities finally decided that I was not allowed to enter the camp with my camera or any other equipment. I thought about it for a few seconds. I felt worthless without my camera. I knew I couldn't tell the story of the refugees properly without it. I handed over my book bag, but not before sneaking my phone out and sliding it into my back pocket. I left my camera behind.

Moving forward, I began to observe my surroundings. Box-like rooms connected to each other in long rows that reached to the end of the camp. Each room was painted in a different pastel color. There were three pathways of cement littered with dirt and debris connecting back to where the security guard stood. I noticed a couple of kids playing by themselves in the middle of the pathways; some rode scooters and bikes.

Walking into the camp, I was quickly approached by three refugee women who spoke only Arabic. I was able to communicate with them because my parents are Syrian immigrants and I grew up in an Arabic-speaking household. After introducing myself in Arabic, they opened up to me about the conditions of the camp. It had been days since the women had received any dish soap or other cleaning supplies. Things we take for granted are a big concern for those living in the Kofinou refugee camp. They looked exhausted, but their smiles pushed past the tiredness on their faces. I wanted to document this, but no one wanted to have their photograph taken. "We don't want the authorities to kick us out if they hear us speaking bad about the camp," one said in Arabic. The others agreed—they were willing to talk, but without the risk of having their name or face attached. With their permission, I photographed the rooms some of them were staying in. Realizing I only had an hour before the next bus came, I asked them to show me around the rest of the camp. In the short time we had, I developed a sense of the place these people call home.

"With all simplicity, we went to go meet our death." -Sahar Tamem

Before I left the camp, I met a refugee named Sahar Tamem. We talked and she invited me to sit with them outside their room. She introduced me to her husband and her friend, Muafek. We shared details of our lives. She happened to be an Arabic teacher just like my mom.
She eagerly showed me the small, ordinary room, which was being used as a makeshift classroom. Children's artwork decorated the walls, and words from the last class, which was on Mother's Day, still hung on her dry-erase board. I began to photograph the classroom. She called her time in the camp a blessing. Sahar expressed her love for teaching and said everyone should be connected with their origin. "If you understand your language, then you begin to understand the roots of other languages," Sahar said. "But what if a boy doesn't know his language?"

After every few photos, I put my phone down and looked around for the security guard—I was worried about what would happen if they caught me. With the hour dwindling down, I told Sahar I would be back tomorrow and would see her again, insh'Allah.

The following day I returned to the camp—this time with plans to sneak my camera in. I completed the visitor's routine and took my blue lanyard. I told the security guard I was taking my book bag in with me to complete homework while I spent my time inside, which was a lie. He responded by saying, "No pictures, right?" "Right," I replied. "No camera?" and I shook my head. I was praying he wouldn't ask to search my bag and find my camera body, two lenses and audio recording equipment. With my hands tightly gripping the shoulder straps of my book bag, I walked away from the security guard, not stopping until I was out of his sight.

Walking into the camp that day, I felt more welcomed than I had anywhere in Cyprus. With the exception of the security guard and the workers, the refugees greeted me with a genuine smile as I passed them. I only had a little over an hour to spend at the camp that day, and wanted to speak with as many refugees as were willing. I stopped by Sahar's room and said my hello as she greeted me with a warm hug. Her husband sat outside the room, cigarette in one hand and phone in the other. She was ready to cook lunch and invited me to join them, until I told her I needed to catch the next bus running back to Nicosia. She was happy when I told her I had sneaked my camera into the camp. She wanted me to take her photograph and share her story with the rest of the world. Muafek offered to introduce me to some more refugees, so we walked up the path between the makeshift rooms.

"I felt like I was a human and had a right to live." -Mohammed Mukhalalaty

We came across three men sitting at a table outside a room, smoking cigarettes. Muafek asked the one closest to him if he was interested in talking with me. The man jumped out of his seat with excitement and we walked to his room. On the way there, he introduced himself and said his name was Mohammed and he was from Lebanon. He then started guessing which country I was from by my facial features. It's common for people to mistake me for being Palestinian or Egyptian, but usually they guess Syrian by the third try—which happens here at home too. I told him I was Syrian and he responded by saying "Sharafna," which translates from Arabic to "Nice to meet you."

As I gathered my notes and set up my recorder, the small television sitting on their dresser was silently playing the news. The footage of the Brussels airport bombing had been on repeat all day. The TVs at the rest stop near the bus station had it playing on two sets, sitting side-by-side. Making small talk, he asked me if I knew what had happened earlier that day. I shook my head in disappointment at discovering the news.
When I miked Mohammed up, the first thing he wanted me and the rest of the world to know was that his name is Mohammed, he is a Muslim, but he is not a terrorist. After the recent bombings from ISIS and now the bombing in Brussels, I can understand why he repeated himself at least three times saying those three lines over again. I peeked outside the window of Mohammed’s room and saw the security guard pass by. I waited as the distance between us grew. Mohammed told me not to worry and that I was under his protection but I couldn’t help but worry. After talking with Mohammed, I learned that he put his trust in God and sailed on a ship to a land unknown. He brought his wife, two sons, sister, brother-in-law and their kids, his pregnant mother and step-father. Mohammed told me about the struggles he has overcome in his lifetime and said Cyprus feels like a heaven to him. He tells me how beautiful his life is now and how happy he is. I asked him how he spends his day in the camp and he said there is not much to do here but sit, drink coffee and smoke cigarettes. He is free from the life he felt trapped in and is finally getting a chance to have a good life for himself and his family. I left that day with a heavy heart—knowing I wouldn’t be coming back to see Sahar, Muafek or Mohammed saddened me. My time there was limited but I was able to grasp the feeling of what life is like there. Some feel trapped and want more than anything to be able to go back to their country; others have never been happier with the time they’ve spent in the camp and believe they are living the best life possible. A month after returning to America, I received a Facebook voice message from Sahar Tamem. I had messaged her before asking her how everyone was doing. She responded with this: I don’t know what Sahar has planned for the future of her and her family, but I have no doubt she will accept whatever fate she is given. She is a strong woman with a passion for teaching the next generation about their origins. Sahar has found her next step, but for many refugees the future is uncertain and the right path is unclear.
In 1988 Pierre Bourdieu chaired a commission reviewing the curriculum at the behest of the minister of national education. The scope of the review was broad, encompassing a revision of the subjects taught in order to strengthen the coherence and unity of the curriculum as a whole. To inform this work, the commission early on formulated principles to guide its endeavour, each of which was then expanded into more substantive observations concerning its implications. One of these stood out to me as of great contemporary relevance for the social sciences in the digital university. The principle in question considers those "ways of thinking or fundamental know-how that, assumed to be taught by everyone, end up not being taught by anyone". In other words, what are the elements of educational practice which are integral to it, and how can we ensure their successful transmission in training? These include "fundamental ways of thinking" such as "deduction, experiment, and the historical approach, as well as reflective and critical thinking which should always be combined with the foregoing", "the specific character of the experimental way of thinking", "a resolute valuation of qualitative reasoning", "a clear recognition of the provisional nature of explanatory models" and "ongoing training in the practical work of research". The report extends this discussion to the technologies used in practice:

Finally, care must be taken to give major place to a whole series of techniques that, despite being tacitly required by all teaching, are rarely the object of methodical transmission: use of dictionaries and abbreviations, rhetoric of communication, establishment of files, creation of an index, use of records and data banks, preparation of a manuscript, documentary research, use of computerised instruments, interpretation of tables and graphs, etc.

Political Interventions: Social Science and Political Action, pg 175

This concern for the "technology of intellectual work" is one from which we could learn a lot, as is the importance placed upon "rational working methods (such as how to choose between tasks imposed, or to distribute them in time)". It maps nicely onto what C. Wright Mills described as intellectual craftsmanship. When we consider the technologies of scholarly production – things like notebooks, word processors, index cards, post-it notes, print outs, diagrams and marginalia – our interest is in their use-in-intellectual-work. The technologies become something quite specific when bound up in intellectual activity:

But how is this file – which so far must seem to you more like a curious sort of 'literary' journal – used in intellectual production? The maintenance of such a file *is* intellectual production. It is a continually growing store of facts and ideas, from the most vague to the most finished.

The Sociological Imagination, pg 199-200

If we recognise this, we overcome the distinction between theory and practice. The distinction between 'rational working methods', the 'technology of intellectual work' and 'fundamental ways of thinking' is overcome in scholarly craft. The role of the technology is crucial here: if we suppress or forget the technological, transmission of these practices is abstracted from their application, leaving their practical unfolding to be something which has to be discovered individually and privately ("ways of thinking or fundamental know-how that, assumed to be taught by everyone, end up not being taught by anyone").
But places for discussion of craft in this substantive sense have been the exception rather than the rule within the academy. Perhaps social media is changing this. It is facilitating a recovery of technology, now finding itself as one of the first things social scientists discuss when they enter into dialogues through social networks and blogs. But it also facilitates what Pat Thompson has described as a feral doctoral pedagogy: Doctoral researchers can now access a range of websites such as LitReviewHQ, PhD2Published and The Three Month Thesis youtube channel. They can read blogs written by researchers and academic developers e.g. Thesis Whisperer, Doctoral Writing SIG, Explorations of Style, and of course this one. They can synchronously chat on social media about research via general hashtags #phdchat #phdforum and #acwri, or discipline specific hashtags such as #twitterstorians or #socphd. They can buy webinars, coaching and courses in almost all aspects of doctoral research. Doctoral researchers are also themselves increasingly blogging about their own experiences and some are also offering advice to others. Much of this socially mediated DIY activity is international, cross-disciplinary and all day/all night. There can be problematic aspects to this. But when it’s valuable, it’s at the level of precisely the unity of thinking, technology and activity which the commission advocated. Social media is helping us recover the technology of intellectual work and it’s an extremely positive development for the social sciences.
Today's guest post is by Lincoln Stoller, PhD, CHt, a hypnotherapist in British Columbia who writes at mindstrengthbalance.com.

Law and policy are not driven by science, they're driven by expediency and advantage. In the US in the 1980s, psychotherapy was replaced by pharmaceuticals as subsidies for therapy were cut as part of a shift toward managed care. This was in spite of the success of therapeutic models and the lack of managed care alternatives.

Today's interest in the potential of psychedelics is not solely due to breakthroughs in their use. That potential has been noted since the "Good Friday Experiment" of 1962, and there have been sixty years of positive reports since then. What's different now is not the science, but institutional attitudes and social directions.

It appears that the promise of psychedelic-assisted psychotherapy is a leading force for this kind of change, but I suspect that this is more of a common cause and a rallying cry. Psychotherapy, a new "white knight" on the scene, seems to be leading psychedelics in a drive toward legality, but there is more going on. As prospective users, or as people who will be impacted by their use, it behooves us to look a little deeper into the alliances that are being formed. Let's go back and look at some of the history.

Scheduled drugs are a broad set of categories used to designate drugs purported to have no socially redeeming value. Schedule I includes heroin, LSD, MDMA, and peyote, which reflects little about the way these chemicals are used or the risks they pose. Schedule II drugs include cocaine, methamphetamine, and fentanyl, drugs that are harming more people but are less heavily punished than the Schedule I drugs. These categories were ill-advised and hypocritical from the start.

Amphetamines have been used by armed forces since WWII. The Axis forces, from those in the field to the High Command, subsisted on continuous drug abuse. US forces—all Allied forces, I presume—also used amphetamines widely. In 2003, Dr. Pete Demitry, an Air Force physician and a pilot, stated that the use of speed "is a life-and-death issue for our military."

If you're sending people out to kill each other, then you might as well extract every last bit of energy from them. A similar argument would hold that providing relaxation justifies servicemen's use of marijuana, as roughly half of servicemen in Vietnam smoked it. This argument might not apply to the military's use of opiates, as ten to fifteen percent of US servicemen were addicted to heroin during the Vietnam war. Or maybe it does, if it kept them fighting. But stateside, marijuana was illegal.

Marijuana's illegality was not based on health factors but appears to involve racism and economics. An unpopular increase in Mexican immigration brought the herb into the country in the early part of the century, and the hemp industry threatened the news and paper industry. Waves of laws were passed controlling pot from the early 20s until the blanket illegalization in the 1970s. These laws were peddled under false pretenses and with stunning biological ignorance. They followed the general mandate of outlawing alternative recreational drugs that threatened the alcohol and tobacco industries. These laws were also aimed at weakening the power of those who used them. In 1970, President Richard Nixon spearheaded the Comprehensive Drug Abuse Prevention and Control Act.
In 1971, Nixon initiated the “War on Drugs,” whose main goal was not public safety but the consolidation of power. John Ehrlichman, Nixon’s Assistant for Domestic Affairs, made this clear in 1994, long after the fact, when he said: “The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.” Nixon referred to Timothy Leary as “the most dangerous man in the world.” One might say this was a case of the pot calling the kettle black, were the Nixon administration not more dangerous by far. Closely correlated with Nixon’s drug war, the incarceration rate of black Americans went up tenfold. In 2008, the Washington Post reported that one in five black Americans would spend time behind bars due to drug laws. Subsequent administrations continued Nixon’s well-documented racism and exceptionalism, funneling money to fascist governments in Latin and South America through selective management of the illegal drug trade. This continued in Panama until it was exposed when Nicaraguan Contras downed a CIA airplane. Then, finally, the CIA had to wash its hands of Panama. The hypocritical War on Drugs continued through all administrations, and continues still. Mexico’s former defense minister was just arrested in Los Angeles, finally identified as the long-elusive Mexican drug kingpin “the Godfather.” He was Mexico’s top military official from 2012 to 2018. The war on drugs has been a war to make money, further political goals and personal interests, and control people’s minds. It was clearly okay to mess with people’s minds when it benefited our imperial objectives or furthered specific interests at home. And while I think Donald Trump is, himself, about as helpful as global warming, what he refers to as “the deep state” has clearly been up to its armpits in drug money. The pretense of the War on Drugs is falling apart. People with drug addiction were first framed as devils, then criminals, and then as mentally ill. Toxic street drugs are now synthetic. Pot is not only tame by comparison, but a profitable legal business, a source of huge tax revenues, and even therapeutic—it’s questionable how much that really matters, though it’s good for public relations. Now comes the next beachhead: psychedelics. The forces carrying the standard are psychotherapists, with entrepreneurs close behind. Given how much graft, evil, and bullshit has been perpetrated so far, just how warmly should we welcome these new initiatives? Are we trading illegality for medical restrictions in a situation where psychedelics should be legal for all? The inclination of the medical industry is not clear but, in the past, these organizations have struggled for the right to control medication. I’ve spoken to leaders who recognize both the risk and the potential for moving restrictive powers from one authority to another. The general view is that this is an opening. The hope is that as the population becomes more experienced, they will become more accepting.
The general population has been clueless about the undercurrents of drug policy. How well-informed are these psychologists and entrepreneurs regarding the truth of the fray they’re entering? We may be enthusiastic to join enlightened psychologists in removing old restrictions, but what new rules are we working toward? The APA serves three audiences. They serve the needs of their members, which include psychiatrists, psychologists, social workers, and their own management. They serve the public and public institutions, which include your insurance company, courts, and legislatures. And they serve patients and clients of mental health services. Because this mandate and these services are so broad, you will not be surprised that there’s always a lot of discussion about every element of the DSM, from its general structure to its individual diagnoses. Most users of the DSM think it was written just for them. Health service providers largely believe that the ailments described are real things that truly exist in people, for which they need therapy and from which they seek a cure. Clients involved in the legal system believe mental competence is well defined through reference to the DSM, though, if pressed, attorneys will quickly admit that it’s what succeeds in court that counts, not what works at home. Hospitals and insurance companies are even less concerned with arguing the fine points. For them, the DSM is a structure they can build upon, and the truth is measured by their bottom line. It is in this context that you must understand that the DSM is not written to meet any of these needs. It is written to provide a language for description and discussion of major social trends and dominant personal conditions. Its descriptions are not diagnoses in the usual medical terms, but rather behaviors and presentations that more or less define collections of people. There is no pretense of identifying disease or cure. The DSM really is a dictionary, a description of ideas, not an explanation or a proof of anything. When it’s said that psychedelics are being used for psychotherapy, what is meant is that psychedelics are being used to move people from one category of the DSM to another. You may think that what is meant is that your trauma, dependence, or depression might be cured, but cure is not part of the DSM. What is meant is that your presentation will be shifted such that you no longer fit the category. That is to say, it’s not what you think that counts; it’s what the doctors think, and it’s always this way when you surrender yourself to experts. In these turbulent times, great cures are heralded as coming out of the jungle and from new studies. New cures are trumpeted for illness and addiction, and new means for revelation and personal growth. But as other ailments have shown, and as your insurance company will attest, it’s the people who set policy who will determine how these resources are made available. If we leave the power for change in the hands of the psychological and medical establishment, then it will be cast in the terms currently established by the DSM. Mind expansion, revelation, insight, universal consciousness, and divine connection are not categories in the DSM. If you want to benefit from mental health services, then you’re going to need to present yourself as someone with a recognized dysfunction. If you want to be treated with psychedelics, then you’re going to need to have a diagnosis.
Right now, as trials and experiments are being done, the criteria are lax. You can be included in a trial based on whatever the experimenters decide. But once treatments are established, you’ll need to fit within the paradigm. It’s not clear how narrow or carefully controlled these parameters will be. Currently, Canada is moving to legalize psilocybin for terminal patients. That might mean you need to qualify for hospice, and this is probably something you’d rather not be qualified for. MDMA is being tested in a therapeutic context for various forms of distress and dysfunction, but that’s not what most Ecstasy users are using it for. Should you need to be sick or dying before you’re allowed to use psychedelics? The history of drug legalization spans everything from the eminently reasonable to the totally corrupt. Poisons should be regulated, and all medicines are poisons. Yet most poisons are not regulated. You can make a fatal brew from the foxglove you’ll find in almost every garden. Sales of Drano are not regulated, and it will kill you quickly. Sales of alcohol are regulated, but you’d be hard-pressed to take a fatal dose. You can buy as much ethyl alcohol as you want if you pay the government tax, but you can’t buy any if you don’t. Ether is now considered toxic and you need a chemical permit to buy it, but it used to be freely available and people made it into cocktails. Who gets to say what poisons are legal? Psychedelics are illegal out of medical ignorance and for political advantage, and remain illegal for those reasons. Accepting their use for psychotherapy proposes a laudable medical use, but does not address the enduring issues of ignorance and manipulation. We would like to think that this is a step in the right direction, but where do these steps lead? Are we surrendering control to a more trusted authority? We’ve been trained to trust our doctors. I’m a doctor—a doctor of physics—and a therapist, and I know better than to trust doctors and therapists. I don’t trust them in physics and I don’t trust them in medicine. Doctors and therapists are trained as practitioners within their scope of practice. Psychedelics are outside their scope of practice. There is little chance that revelation will fit within this scope. How would you feel if your personal growth required your doctor’s approval? If these new initiatives lead to more exploitation and control for profit and politics, then we should not be so enthusiastic. Psychedelics hold more promise than returning things to normal, and I object to the limited view that psychedelics should only serve psychotherapy. If what we want is intelligence, insight, and autonomy, then we should remain attentive to the full promise of personal growth for the reform of individuals, society, and ecology. Psychedelics are not toxic medicinals; they have the potential to be consciousness-expanding entheogens. This is a far larger goal than assisting psychotherapy. We must keep these larger personal and social goals in our sights.
While communicating over an unsecured medium like the internet, you have to be careful about the confidentiality of the information you share with others. There are two techniques used to preserve the confidentiality of your message: symmetric and asymmetric encryption. The fundamental difference between them is that symmetric encryption uses the same key for both encryption and decryption of the message, whereas asymmetric encryption uses a public key for encryption and a private key for decryption. Some further differences are summarised in the key differences listed below.
Definition of Symmetric Encryption
Symmetric encryption is a technique that uses a single key to perform both the encryption and the decryption of a message shared over the internet. It is also known as conventional encryption. In symmetric encryption, the hosts participating in the communication already share the secret key, which has been exchanged through external means. The sender uses the key to encrypt the message, and the receiver uses the same key to decrypt it. Commonly used symmetric encryption algorithms are DES, 3DES, AES, and RC4.
Definition of Asymmetric Encryption
Asymmetric encryption is an encryption technique that uses a pair of keys (a private key and a public key) for encryption and decryption. The public key is used to encrypt the message and the private key to decrypt it. The public key is freely available to anyone who wants to send a message; the private key is kept secret by the receiver. Any message encrypted with the public key can be decrypted only with the matching private key. Asymmetric encryption algorithms execute slowly because they are more complex and carry a higher computational burden. Hence, asymmetric encryption is used for securely exchanging keys rather than for bulk data transmission, and it is generally used to establish a secure channel over a non-secure medium such as the internet. The most common asymmetric encryption algorithms are Diffie-Hellman and RSA.
Key Differences Between Symmetric and Asymmetric Encryption
- Symmetric encryption always uses a single key for both encryption and decryption of the message, whereas in asymmetric encryption the sender uses the public key for encryption and the receiver uses the private key for decryption.
- Asymmetric encryption algorithms execute more slowly than symmetric ones because they are more complex and carry a higher computational burden.
- The most commonly used symmetric algorithms are DES, 3DES, AES and RC4. On the other hand, Diffie-Hellman and RSA are the most common algorithms used for asymmetric encryption.
- Asymmetric encryption is generally used for exchanging secret keys, whereas symmetric encryption is used for transmitting bulk data. Being complex and slow, asymmetric encryption is well suited to key exchange, while the faster symmetric encryption handles the bulk data transmission.
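To make the distinction concrete, here is a minimal sketch in Python using the third-party cryptography package (the choice of library is an assumption; the article names none). It encrypts a short message symmetrically with Fernet (AES under the hood), then uses an RSA key pair to wrap the symmetric key, mirroring the hybrid pattern described above in which asymmetric encryption protects the key and symmetric encryption protects the bulk data.

```python
# Minimal sketch, assuming the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

message = b"Meet me at noon."

# --- Symmetric encryption: one shared key both encrypts and decrypts ---
shared_key = Fernet.generate_key()        # both parties must hold this key
f = Fernet(shared_key)                    # Fernet uses AES internally
ciphertext = f.encrypt(message)
assert f.decrypt(ciphertext) == message

# --- Asymmetric encryption: public key encrypts, private key decrypts ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()     # safe to publish
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(shared_key, oaep)       # RSA protects the small key...
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert recovered_key == shared_key        # ...while the symmetric cipher carries the data
```

Note how the slow RSA step only wraps the short Fernet key, while the fast symmetric cipher carries the actual message; that division of labour is exactly the key-exchange-versus-bulk-data trade-off described above.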
If you know me personally, follow me on social media, or have spent any amount of time on this website, you probably know that I LOVE Google! We were still on USB Flash drives when I started teaching, which work great, but… Then…Google Drive Came Out …and it changed my world! Other than the search engine, Drive was my first taste of all that Google could really do. Google just keeps improving all of their products so much each year. Once I was hooked, and they came out with the Google Certified Educator program, I knew I wanted to be part of it. One of the most frequent questions I get is “why did you become a Google Certified Educator?”. A lot of times, that is alongside the question “aren’t you a band director?”. Let me give you some insight: Why I Became Google Certified The Google Certified Educator program was appealing to me because I wanted to use the tools that I already found valuable in a more efficient way. The process to get certified includes “modules” to work through with plenty of opportunity to practice and identify where to get help. This alone is incredibly valuable! In addition to learning about features of Google apps that I already knew pretty well, I found that there were other Google tools that I had not even heard of. I discovered and learned about Google Keep through this process and I can confidently say that I now use that tool every.single.day. The Badges. I am also a sucker for badges, ya’ll. Who knows where that comes from, but I have definitely been known to run an extra Disney race because there is a special medal attached to it! I am happy to display my Google Certified Educator Badges here on my website and in other relevant places because I worked hard for them! The Certification Process First important thing is that the training modules are FREE! This is an incredible resource to be offered at zero cost. Just like anything in life, you will gain exactly as much value as you want to out of these modules. There are plenty of opportunities to practice and the modules will direct you to the Google Help resources where you can learn even more. It is completely possible for you to complete the modules just for your own personal growth and never take the certification tests! The tests cost a small fee, are timed, and are fairly rigorous. When you register for a test, a special Google account is created for you. Once they create this, you will receive the login info and can begin the testing process. You will be videoed taking the test and, even if you are usually a fast test-taker, you may find that you take the entire amount of allotted time just because there is a lot of material. Part of the test is a pretty standard question and answer situation. The rest of it is you showing what you know. You may, for example, be asked to open a file that is in your Drive (the special test account one), add to it, connect it to another document or share it somewhere, and then send an email about it. I think it is a pretty good assessment of how well you know about and can manipulate the different Google apps. Once you are finished with the test, you submit it for scoring and wait for your results via email. If you pass, congrats! You can only become certified if you pass each test. To be clear, there is a test for Level 1 and a different test for Level 2. Why is this valuable to Music Educators? I teach because I love sharing music with my students. My favorite times are when I am making music with kids, or JUST teaching. 
How often do we have a day, in a school setting, full of JUST teaching? Basically never. There are always grades, paperwork, trips to plan, rooms to reserve... the list goes on and on. Using Google tools makes all of that stuff EASIER! Using these tools gives me more time to be a music teacher! I can save a flyer in Drive and then attach it to my Google Calendar and share it with my Google Classroom in a matter of literal seconds. Each week, I make a “to do” list on Google Keep to share with my colleague and have it open next to my Google Calendar where our weekly lesson plans are. I am all over making a BOSS spreadsheet to track attendance that syncs up with itself…and sharing it with my colleague, who can update it as needed. In my scope and sequence spreadsheet, I can add links to different handouts and worksheets that we will pass out so that I don’t have to hunt them down in Drive. Y’all. CULTIVATE YOUR EFFICIENCY by using these Google tools. The more you do, the better you get, and the more you’ll be able to do in the future. Why is this valuable to Music Educators? We run PROGRAMS, not just classrooms (though, classroom teachers: get in on this action, too!). We have a lot to do and spin LOTS of plates every day. Save your sanity and learn how to use tools that were DESIGNED for efficiency and seamless connectivity. Is this right for you? Okay… I think you can guess that yes, I think this is right for you! You might want to start with just going through the modules and seeing how quickly you can absorb the information and put it into practice. Here’s the link to the Google Teacher Center – check it out and see what you think. There are several certification level options that you can find out about there. Take the tests when you’re ready. There is no timeline to be ready for the tests until you register; at that point you have a limited amount of time. You will learn from taking the tests as well, because you’ll be practicing what you’ve studied! Certified Educator certifications are good for 3 years before a re-test is needed. I’ve been through the process twice now. The renewal certification was easier for me than my first certification. I’m sure this is because I USE these tools on a regular basis now. Like I said earlier, Google is constantly updating and improving their tools. That being the case, a quick refresher every 3 years is a good idea!
In 2017, there was a change of government in New Zealand. At the State Opening of Parliament after the election, the Speech from the Throne announced a new vision for the country’s economic policy: “We need to move beyond narrow measures and views of value and broaden the definition of progress. The economic strategy will focus on how we improve the wellbeing and living standards of all New Zealanders.” However, the government recognised that much of our contemporary understanding of economic progress and wellbeing, including the OECD’s framework, has been informed by Anglo-Saxon philosophical traditions. Therefore, the government also worked to ensure that diverse communities were able to contribute their voices to defining wellbeing in New Zealand, based on their cultural perspectives, values and knowledge systems. This involved an initiative to develop a vision of wellbeing based in mātauranga Māori, the knowledge of the country’s Indigenous population. The name for this Wellbeing Framework is He Ara Waiora, meaning “pathway to wellbeing”. The framework demonstrates relationships between different elements of wellbeing. At the centre is Wairua (spirit), which reflects the values, beliefs and practices that are the foundation or source of wellbeing. Surrounding the spirit (in green) is Taiao (the natural world/environment), which is presented as the foundation and source of social wellbeing. The next circle, in red, is Ira Tangata (society), which encapsulates human activities and relationships. The concept of mana (power) is seen as vital for wellbeing, with people thriving when they are empowered to grow and develop, connect with others and have the resources they need to flourish. The outside circles (blue) present principles to guide how people should work together to achieve wellbeing. These emphasise the importance of coordination and alignment, working in partnership and according to the right processes, promoting collective and strength-based actions, protecting and promoting empowerment, and stewardship of the environment.
The 3 stages to money success (part 2: disaster protection)
So you’ve got your percentage splits sorted, and you're on the road to creating your freedom fund. The next step is making sure that you, your wealth and your family are protected should something bad happen. Not the most fun subject to think about, but the stats are pretty scary when it comes to our chances of getting ill. Despite this, it’s incredible to think that we are twice as likely to insure a pet as we are our partner! It’s also important this is prioritised above investing (step 3) because we are protecting our loved ones, the money we’ve already accumulated and the money we have the potential to accumulate. There’s no point saving for 15 years, building up a nest egg, then having to use it all in an emergency. Equally, the last thing we want is to interrupt the wealth creation process. Just think, if you earn £50k per year and still have 20 years of work ahead, your personal earning potential is £1 million. Any time off work due to an accident or illness can have significant impacts on your future lifestyle. So, step 2 to money success is disaster protection. This falls into two brackets:
Life insurance, to cover the surviving partner or children should the unthinkable happen
Income protection / critical illness, to protect the money you’ve accumulated or have the potential to accumulate. This essentially provides peace of mind should a serious accident or illness occur
To calculate how much life insurance you need, think about what you would be losing financially should different scenarios happen. So, for life insurance, calculate:
Your total debts such as mortgages, loans and credit card balances
The cost of a funeral (in the UK this is around £3k - £5k on average)
What amount the surviving partner will need each month and how long for
A worked example with illustrative figures is sketched at the end of this section. In addition to life insurance, and to protect you, your family and your savings, there are a couple of options:
Income protection - pays you a monthly income for a set period of time should you be off work due to a predefined illness. The amount is generally limited to 60% of your gross monthly earnings.
Critical illness - this pays a lump sum should you be diagnosed with a critical illness. Some people like to have enough cover to pay off their mortgage; others are happy with an amount that would cover them for a few years, should they need to be off work or make adjustments to their house.
Applying for life insurance, critical illness or income protection can, in some circumstances, be a lengthy process. Headline rates you see on some comparison websites are not necessarily the final amounts you will pay. Underwriters often ‘load’ policies due to medical conditions, smoking status, alcohol consumption or above average BMIs. Because of this we would generally recommend approaching underwriters before applying to one insurer. That way, you already know a pre-underwriting decision, and can go with the insurer with the least amount of loading (if there is any at all). For critical illness and income protection, it’s also worth analysing the payout history of the insurer too.
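To make the life insurance calculation above concrete, here is a small sketch in Python with purely hypothetical figures; the mortgage, funeral cost, monthly need and term below are illustrative assumptions, not recommendations.

```python
# Hypothetical worked example of the life insurance sum-assured calculation above.
# All figures are illustrative assumptions, not advice.

total_debts = 180_000          # outstanding mortgage, loans and credit cards (£)
funeral_cost = 4_000           # mid-point of the £3k-£5k UK average quoted above (£)
monthly_need = 1_500           # what the surviving partner would need each month (£)
years_of_support = 15          # how long that support should last

income_replacement = monthly_need * 12 * years_of_support
life_cover_needed = total_debts + funeral_cost + income_replacement

print(f"Income replacement: £{income_replacement:,}")   # £270,000
print(f"Suggested life cover: £{life_cover_needed:,}")  # £454,000

# Income protection is typically capped at around 60% of gross earnings:
gross_annual_salary = 50_000
max_monthly_benefit = gross_annual_salary * 0.60 / 12
print(f"Approx. income protection benefit: £{max_monthly_benefit:,.0f} per month")  # £2,500
```

The same pattern works for any scenario: add up the debts you would leave behind, the one-off costs, and the ongoing support you want to fund, then compare the total against any cover you already have in place.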
Las Hogueras de San Juan – the Bonfires of Saint John – is by far the most important event in Alicante's cultural calendar, and it stands out among the most popular festivals in all of Spain. A truly amazing experience, the somewhat extravagant celebration will certainly provide you with an exciting time and a wealth of anecdotes that will link you closely – inextricably, even – to this beloved Mediterranean city. Here at Enforex, we want to encourage our students to experience as much of Spain's incredible culture as possible. Festivals are a wonderful way to have a first-hand taste of the immensely varied range of traditions that define each city and town, which is precisely why our Spanish school in Alicante is open all year round... Even during holiday seasons! Learn more about our year-round Spanish courses in Alicante here: Alicante Courses If you can't make it to Las Hogueras this year, the next best thing might be to read about it, learn the background to this unique festival, and get prepared for the occasion when you might be able to visit. We guarantee you, after reading this section you will be left pining to witness the real thing!
When and Where is Las Hogueras?
While the whole festival lasts for five days, the inauguration of the fiestas comes at midnight on the evening of June 20, as the summer solstice takes place. Evidently a pagan rite connected with the agricultural implications of the arrival of summer, the celebration has seen huge bonfires built in Alicante for centuries. It is an autochthonous Alicantine tradition to jump over the bonfire, although in recent years the size of the fires has made this far harder, and also unreasonably dangerous! Despite its obvious pagan roots, the Hogueras have long been linked to the feast day of Saint John, which is celebrated on June 24. That is precisely the day when Las Hogueras come to an end. While Saint John's Eve is celebrated all over the country, from Barcelona to Vigo and all the way down to Andalucía, nowhere are the arrangements as sumptuous and as striking as in Alicante, where the entire city comes, quite literally, to a full stop in order to pay its respects to the tradition in the rightful fashion. So, for the maddest and most famous bonfires in the country, be sure to head to the Mediterranean shoreline of Alicante.
Las Hogueras Traditions
The most famous part of Las Hogueras de San Juan is, not surprisingly, the blazing hogueras (bonfires). Traditionally, these Hogueras were nothing more than piles of junk and old furniture collected during massive summer house-cleaning. Constructed of wood and papier-mâché, today's Hogueras are elaborate works of art, reminiscent of the massive structures of Valencia's Las Fallas. On the big night of the hogueras, a firework display in the shape of a palm tree is set off from the picturesque castle of Santa Bárbara, just above the bay of Alicante, to mark the beginning of the fiestas with a literal bang. From that moment on, Alicante's 88 hogueras light up the night one by one. Traditionally, once the fire was blazing, you had to jump seven times around the flames or go in the water, superstitions which resulted in the magical atmosphere that continues to shroud the night of San Juan. Although the fires eventually die down – nowadays thanks to the firemen – the party continues throughout Alicante for several days and nights.
Rockets blast off, balloons float up in the air, parades snake through the streets, and fireworks festively illuminate the sky each night at midnight. Finally, there is an annual competition to elect the next Belleza del Fuego (Beauty of the Fire), who then serves as the festival's queen along with her six ladies of honor. This tradition has been going on for centuries, but the Hogueras and the accompanying festivities that you see today have existed in their present form since 1928. The celebratory bonfires were originally part of an agricultural pagan ritual marking June 21, the longest day of the year. The religious undertones of Las Hogueras, and its attachment to the feast day of San Juan (Saint John) came as an afterthought.
A dog is a man’s best friend, but how much do we really know about this cool friend? In this article, we have put together some uncommon dog facts that will wow you.
1. Dogs can learn up to 1,000 words.
2. Tail wagging is a good sign of a really happy and friendly dog.
3. Within the first 16-20 weeks of birth, puppies grow to about half of their adult body weight.
4. Puppies sleep close to 20 hours daily during their growth phase.
5. The Greyhound, the fastest breed of dog, can sprint at up to 44 miles per hour.
6. There are over 400 million dogs in the world.
7. Just like humans, dogs do have dreams.
8. The Mastiff, one of the heaviest dog breeds, can weigh up to 200 pounds.
9. While adult dogs have 42 teeth, puppies have 28.
10. The shortest breed of dog is the Chihuahua.
11. The tallest breed of dog is the Irish Wolfhound, which stands about 30-35 inches tall.
12. The best time to bring a puppy home is between 6-12 weeks of birth.
13. When overheated or stressed out, a dog’s sense of smell can be reduced by up to 40%.
14. Unlike humans, who have over 9,000 taste buds, dogs have just 1,700.
15. Research suggests that over 63.4 million households own a dog in the US.
16. Dogs often sweat through the pads of their feet.
17. In 1969, Lassie became the first dog to be inducted into the Animal Hall of Fame.
18. Petting a dog can help lower human blood pressure.
19. Just like their ancestors, wolves, dogs like to move in packs.
20. The Basenji is often referred to as the barkless dog due to its unusually shaped larynx, which produces a yodel-like sound instead of a bark.
21. A dog’s sense of smell is up to 10,000 times stronger than a human’s.
22. Dogs can be trained to detect diseases in humans.
23. Just like humans, dogs do get jealous.
24. The average body temperature of a dog is between 101 and 102.5 degrees Fahrenheit.
25. Dogs are mentioned in the Holy Bible more than 30 times.
26. On average, 15 people die from dog bites in the US annually.
27. Obesity is the most common health problem among dogs.
28. Over one million dogs in the US have been named as beneficiaries in their owners’ wills.
29. Just like human fingerprints, dog nose prints are unique and can be used to identify them.
30. The St. Bernard is also among the heaviest dog breeds.
Continue reading to uncover more interesting dog facts.
31. Unlike humans, dogs don’t have an appendix.
32. Just like humans, dogs do have prostates.
33. Evidence suggests that humans have been keeping dogs as far back as 12,000 years ago.
34. 58% of people put dogs in their family portrait.
35. The term “raining cats and dogs” was coined back in 17th-century England, where it was believed that many cats and dogs died during heavy downpours.
36. Dogs can read human facial expressions.
37. A vast majority of US Presidents have been dog owners.
38. Just like humans, older dogs can develop dementia.
39. There are over 700 breeds of purebred dogs in the world.
40. On average, a city dog lives three times longer than a country dog.
41. The famous Dalmatian dogs are born white and develop their spots over time.
42. Grapes can cause kidney failure in dogs.
43. The oldest dog in the world died at the age of 29.
44. Dogs are not fully color blind, as people often claim; they can see yellow and blue.
45. A one-year-old dog is about as mature as a 15-year-old human.
46. The Afghan Hound is believed to be the least intelligent dog breed.
47. Smaller breeds of dogs mature faster than larger breeds.
48. Long toenails are one of the major causes of discomfort in dogs.
49. Some dogs are very good swimmers. The Newfoundland dog is very comfortable in the water and has been used as a water rescue dog.
50. Dogs have up to 18 muscles responsible for controlling their ears.
51. Over 45% of dogs in America sleep on their owner’s bed.
52. Contrary to popular opinion, the Australian Shepherd is actually from America, not Australia.
53. Dog noses are usually wet so that they can absorb scent chemicals.
54. The tallest dog in the world, named Zeus, is about 44 inches tall.
55. The Greyhound would beat a cheetah over a long distance.
56. Over 30% of Dalmatian dogs are believed to be deaf in one ear.
57. The Saluki is believed to be one of the oldest breeds of dog in the world, dating back to 329 BC.
58. The Chow Chow and the Shar-Pei are the only dogs in the world known to have black tongues.
59. Three of the twelve dogs on the famous Titanic survived.
60. Gazing into a dog’s eyes releases a hormone called oxytocin, which is good for both the person and the dog.
61. According to research, dogs were domesticated between 9,000 and 34,000 years ago.
62. On average, a dog lives between 10 and 15 years.
Though blue agave-based tequila is riding the popular liquor wave right now, very few realize how integral the whole family of agave plants was to the lives of Mesoamerican people throughout the ages. This documentary, narrated hauntingly by Edward James Olmos, dives into that cultural, spiritual, economic, and culinary history while also revealing a rebirth in respect for the plant in the present day. Why make a documentary about agave? It is the second in our series on indigenous foods of the Americas, the first being a documentary film and book about chocolate called Chocolate: Pathway to the Gods. Both chocolate and agave were critically important resources to native Mesoamerican peoples thousands of years ago, and this is little recognized. In the end, we felt that agave would make a very compelling story that would link our present love affair with mezcals and tequilas with their fascinating history. Do you think many people realize how many uses there are for the plant? People who have lived in or spent considerable time in Mexico probably know this, as do many gardeners, botanists, archaeologists and other specialists. But no, most people are probably not aware of the many potential uses. Why are most not aware of how critical agave was to the early Americas? On the one hand, few people seem knowledgeable about the detailed use of any specific resource in the prehistoric era. On the other hand, archaeologists themselves have only begun to realize this signal importance over the past few decades as more data have become available on widespread uses and the domestication of agaves throughout what botanist Howard Scott Gentry called “Agaveland.” How is the Mescalero tribe doing? The Mescalero face many of the same challenges as other Native American groups do in today’s society, but they are a proud people and willing to overcome whatever obstacles come their way to preserve their ancient culture. The population has been growing slowly throughout the 20th and early 21st centuries. From only a few hundred people when the reservation was established at the beginning of the twentieth century, the tribe has grown to more than 3,000 members today. Agave no longer has the importance within the tribe that it once had, but all of the members recognize its ceremonial role and it is likely to continue to be a part of ritual. Many tribal members also seem to still consider it a sweet treat. The interdependence of agave and humans is also quite fascinating. Agave’s many uses made it a critical early resource. Throughout Mexico and the American Southwest, native groups used it as a primary source of fiber for tools, construction, and clothing. The use as fiber may have been the earliest and most important. We suspect it is from this early use that images of rope became an integral part of imagery in early Mesoamerican art, from cave art such as that seen at White Shaman shelter in our documentary to rope imagery in important cultural contexts among the Olmec and at the early Zapotec capital of Monte Alban in Oaxaca. Even today many people in Mexico depend upon agave for a livelihood — to the point where Eric Hernandez, the mezcal maker we interviewed in the film, said at one point that he thought the Mexican flag should have featured an agave plant instead of a prickly pear. How did you secure Edward James Olmos as the narrator?
We always felt that the narration would be critical and wanted an authoritative and perhaps somewhat exotic or mysterious sounding voice to bolster agave’s role in ritual and ceremony. At one point we considered having a female voice narrate the film, speaking for the goddess Mayahuel. In the end, we thought Olmos’ low-pitched, breathy voice conveyed exactly the feeling we wanted. We didn’t think we had a chance to recruit him, however, for our modestly budgeted documentary, but our cinematographer and our editors encouraged us to try, which we did. To our surprise, he accepted, saying that he thought it was a good film that made a very important point about losing our connection to the past. What do you hope will be the impact of this film? We hope that many viewers will see the agave plant in a different light. But even more important, perhaps, we would like people to understand that, for many traditional peoples, plants are much more than calories or a sweet treat, but rather an integral part of their culture and identity, even worshipped as gods in some cases. Agave is just one beautiful example of how a plant can be so much more than just a plant in a cultural context.
Ella Laski ‘23 gives a history of the attacks on September 11, 2001, and tells readers how Perkiomen School remembers 20 years later. September 11, 2021: on this day we marked 20 years since America experienced the largest foreign attack on domestic soil throughout United States history. Tuesday, September 11, 2001 was a day that left a permanent stain on our country’s history. Every American who lived through that horrible day remembers exactly where they were and what they were doing when the first hijacked plane hit the North Tower. Everyone paused and turned on their radios or televisions to see what was going on. Nothing could have prepared America for the events of that dreadful Tuesday morning. About 3,000 people woke up and left for work or to board a flight. According to many, it was a beautiful morning, and the East Coast opened its eyes underneath a cloudless blue sky. Unfortunately, that brilliant sky would soon be filled with clouds of smoke and despair. Al-Qaeda, a terrorist group based in Afghanistan founded by Osama bin Laden, carried out four terrorist attacks that day. Nineteen terrorists hijacked four commercial planes, causing destruction and fear across the United States. The initial attacks took place in lower Manhattan at the World Trade Center, a massive business complex with the 110+ story Twin Towers as the focal point. American Airlines Flight 11 crashed into floors 93-99 of the North Tower at 8:46 a.m. United Airlines Flight 175 crashed into floors 77-85 of the South Tower at 9:03 a.m. Between 16,400 and 18,000 people were in the buildings as they were struck. Both buildings erupted into flames and collapsed due to the jet fuel from the planes. President Bush was in Sarasota, Florida, visiting a second-grade class when his chief of staff whispered in his ear at 9:05 a.m., “A second plane hit the second tower. America is under attack.” Bush departed on Air Force One and landed in Barksdale Air Force Base in Louisiana, not returning to Washington until that night. Less than an hour after President Bush found out about the occurrences, the next attack took place in Washington, DC. American Airlines Flight 77 crashed into the southwestern side of the Pentagon at 9:37 a.m. The last hijacking, United Airlines Flight 93, was supposedly aiming to hit the Capitol. However, informed of the occurrences by phone, its passengers took over the plane and crashed it into a field in Shanksville, Pennsylvania at 10:02 a.m. I was not yet alive to witness the way America wept that day. However, every year, I learn something new about what happened and always lean in to hear stories, whether it be in an article, on television, or from someone else. I’ll never forget my mom telling me about how she was on her way to work in New Jersey, listening to Britney Spears on the radio, as an announcement came through about what was going on. I’ll never forget my second-grade teacher telling me about how she was a young child sitting in class when another teacher ran into the room and told them to turn on the television. I’ll never forget hearing a 911 call of a woman, working on floor 83 in a financial firm, praying the Hail Mary, and calling God’s name. I’ll never forget visiting the 9/11 Memorial with my eighth-grade class and seeing someone find a name inscribed and begin to cry. That same day, I visited the small church across the street, St. 
Paul’s Chapel, which was miraculously untouched by the debris from the heaps of misery surrounding it, where firefighters went to rest and volunteers provided food for the search and recovery personnel in the days following the attack. This year, I listened to Mrs. Weir-Smith tell the Perkiomen community about one of our own, Eric Sand, class of 1984, who passed away in the attacks while working at Cantor Fitzgerald, a firm that lost every employee that came to work that Tuesday- 685 lives. At Perkiomen, Eric was involved in football, basketball, baseball, newspaper, proctors, chorus, and more. He went to Tulane University where he majored in philosophy. Eric was a father, and especially loved his job at Cantor Fitzgerald because he was able to come home around 4 p.m. to teach his young son baseball. Eric took the job in January 2001 after he decided not to continue his music career. Like clockwork, he would call his wife around 8:45 every morning to tell her that he loved her as he had to leave early to take the train into the city. On September 11, 2001, his routine call was different. His mother-in-law picked up the phone and heard that he was trapped in the compromised building. His office was on the 106th floor. She called quickly for her daughter, but while she waited, the line went silent. After the attacks, Eric’s body was recovered. The family was incredulous, but one small detail helped them to believe. He had a small cloth in his pocket used to clean his glasses. Even while at Perkiomen, he was always wiping down the lenses. In memory of Eric Sand, Mrs. Weir-Smith told the Perkiomen community to put in the effort to get to know the people around you - ask them about their day, strike up a conversation, possibly even ask why they are always cleaning their glasses. These are words to live by, and since hearing this, I was inspired to do just that. September 11, 2001 stands among the worst days in our country's history that we as Americans vow to never forget. May we never forget all those who ran towards the scene as everyone else was running away. May we never forget the approximately 400 first responders who took that call knowing it could possibly be their last. May we never forget the 2,977 souls who fell victim to Al-Qaeda’s acts of cowardice and evil. May we never forget the families and friends who were never able to say goodbye. May we never forget the brave men and women who fought in the war against terror to protect our homeland. May we never forget the way America stood shoulder to shoulder, flying any American flag able to be found and volunteering to help in any way they could. May we never forget that throughout any hardship America faces, our beautiful, resilient country is home to countless heroes. May we as a Perkiomen community never forget Mrs. Weir-Smith’s imperative reminder to get to know those heroes that we can call our neighbors and friends.
Even from space enthusiasts, the ‘small stuff’ in our solar system typically gets less attention compared to its major bodies. Yet it is well known that meteorite impacts pose a real, albeit small, risk to our modern society. Extensive evidence has been collected by now showing that the extinction of the dinosaurs was caused by such a cataclysmic event. But in contrast with the dinosaurs, we humans fortunately have astronomers and space agencies that can assess and mitigate these risks. Marking the United Nations International Asteroid Day, June 30th, the NVR is happy to present to you a program of three exciting talks on this topic: Starting off, Detlef Koschny, acting head of ESA’s Planetary Defence Office, will summarize the activities that ESA is undertaking to deal with the threat of a potential asteroid impact: from observations and trajectory modelling to determining the properties of a potential impactor and disseminating advance warning data. Next, Danielle Pieterse, astronomer at Radboud University in Nijmegen, will present the results of a recent survey of asteroids and Near-Earth Objects, conducted with the MeerLICHT telescope in South Africa. MeerLICHT has been developed by the Netherlands Research School for Astronomy (NOVA) and is a prototype for an array of three telescopes in Chile. Finally, Felix Bettonvil, project manager for NOVA, will present what he does in his free time: developing and operating cameras to catch fireball events, bright meteors with a high potential of material reaching Earth as meteorites. Felix will also discuss the effort to retrieve meteorites and what these can tell us about their origin. During the event we will also sign a Memorandum of Understanding (MoU) with the Space Society Twente.
19:45 – 20:00 Opening and walk-in on the Remo platform
20:00 – 20:05 Welcome by Michiel Rodenhuis
20:05 – 20:35 Presentation Detlef Koschny – ESA
20:35 – 21:05 Presentation Danielle Pieterse – Radboud University Nijmegen
21:05 – 21:35 Presentation Felix Bettonvil – NOVA
21:35 – 21:45 NVR & SST MoU Signing
21:45 Closing and online networking
22:15 End of event
Stardust Transit by Shiloh Sophia “I have been traveling for some 70,000 years to arrive at this point on the path, on the Earth,” she said, as if it were the most normal thing in the world. I replied, “That is quite a long time—have you brought your cherished beliefs along with you?” “No” she said. “I lost them along the way, but only a few years back. I carried them with me for a long time. But they are gone now. I feel quite liberated on one hand, and rather lonely on the other.” With that, she put her head down, as if she were looking at something far below. “I see. Well, regardless of how you feel about it, that is a cherished place to arrive at. Even if you can no longer hold onto what you thought you knew. Especially what you fought for and stood by and told others about. It can be good to let those go. Except for one, of course—but that one isn’t a belief; it may be the only true thing humans ever arrive at,” I said, hoping she would pick up the thread—or perhaps I was offering a riddle. She was quiet for a long while, and somberness seemed to settle about her as she contemplated my words. “I think I know about that one true thing. As I search myself, I can’t find anything else that really matters. Yet, it may be the hardest thing of all. And most of us don’t really practice it—but some of us knew about it because it was shared with us. We reached for it, spoke about it, sang after it, shared it, offered it, surrendered ourselves to it. We even put it into our art. That one true thing is love. But not love in concept—love in action. Unreasonable love. Love that makes you do things you don’t always want to do. Love that makes you go out of your way, makes you take risks for other humans your brain tells you are going to be too much trouble. Love, the kind of love that only feels worth it later, after your heart has been opened one more time to see the realness of humanity. It isn’t that we are broken. We aren’t. Rather, we think we are not made to open, but we are. We are made to open, even if it hurts. We call this heart-breaking, but it is more like expanding, pulsing, re-imagining, she-shaping, moving from a previous framework. Rather than contracting or breaking or cracking. Without the opening, the love doesn’t come out. All this avoiding the pain thing has really caused a lot of problems.” “Yes,” I agreed. “I think they introduced that particular version of not getting your heart broken around the 1920s. It was a tragic introduction that got perpetuated by modernism, sexism, existentialism, and other isms that shaped our thoughts without regard for the real stuff of life. Yes, love. That is the one true thing. The thing we rarely arrive at but are always seeking, while also avoiding. The ultimate paradox. Love is going to lift, but love is also going to call down the thunder,” I added, feeling this in my own body. I watched as the artist got up and began to move her hands over the painting she had just finished. A masterwork, over eight feet by twelve feet on wood, it was quite formidable to behold, even for me—and I have seen a lot of attempts at being fully human over the past few thousand years. I went up to the painting and pointed to a shape, asking a question with the expression on my face. “Oh that,” she said affectionately.” Those are the shapes of prayers. That bundle of dots was for the entire planet, and there are some tears mixed into the paint. Tiny dots and marks we add when someone is suffering.” “Who is ‘we’?” I asked. 
“You said, ‘We add when someone is suffering’—to whom are you referring? “Well, there are around 10,000 of us. And when someone is hurting, we add dots. We call them prayer dots. You could be the one suffering, and it helps you feel that pain and move it along into form. Or it could be someone else you know. Or even many millions that you don’t know. Some of us, over time, develop an unreasonable desire to end suffering. We pray for all beings. Somehow, we are actually doing that. Some of us actually believe it helps.” She was looking deeply at the dots now, remembering when she had put them there over thirteen moons before. “You said the word ‘believe’—I thought you said you have given up all belief? Have you, or haven’t you?” “I have given up belief in my old traditions, my old gods, and my old stories of how this all works. I have given up defending any sort of religiosity or political stance or asserting a certain scientific phenomenon—those are all a continuum. I guess the beliefs I have are about how love actually works.” “How does love work?” I asked, walking to the other side of the painting to give her space while keeping my focus on the shapes. I wanted to see what she said here—this was a pivotal moment in her development, as a human. “Love is something you live. You can’t wait to feel it—you just start practicing it anyway because you know that there is something here to explore. Then, as you make these dots, or these marks, or these lines, something happens. I don’t know how it works—but yes, I guess I do believe this love goes out beyond this hand, this brush, this painting, this museum, and it travels, somehow, through the air to reach those who aren’t feeling loved. Now I am sounding all Northern Californian, suddenly. The truth is that I don’t know how love works. But I know that when I practice it and make it real, here on the painting, love is moving from me into the world through my intention.” She was walking to the edge now, and I could see that she was almost ready by the way she was being, in her body. The field around her was rippling, almost popping with aliveness. “Sounds quantum,” I said. “And yes, love does move that way. And yes, I could see how that could form a belief, through practice.” She was nodding yes. She backed up from the painting, taking in the magnitude of it. The symbol of her creativity, the path of her healing, the big bang interpreted as co-creation with two eggs, the fiery field egg from which she emerged, the stars the day she was born, and the human form, stretching into the cosmos, the map of over thirteen billion years, according to this artist was a kind of magnum opus at this point on the path. Now I was nodding yes, too. “Giving up certainty is kind of like a new holy ground for me. I am standing here just now, after all of this time. I have made a stardust transit to this moment, right here.” “Have you arrived, then?” I ask. “I believe I have. And now that I am here, I am leaving again.” Tears came to her eyes. I watched as she took the tears and pressed them into the stars from the day she was born right onto the surface of the painting. I imagined her salt going back into the stars. Making contact. “Say more,” I urged her, hoping she would make the complete transit. “Well. My family left the continent of Africa close to 70,000 years ago. The mitochondrial DNA of our matriarch ends with me since no child will bear this helix of continuity. I am the end of the line. 
There will be no more of our kind, our kin, our story of arrival on earth. My family, we are completing. This painting revealed that to me. I was in my grandmother when my mother was in her womb, and now I stand, becoming an Elder myself, with no stardust to pass on from my own body.” The gravity in her voice was palpable. There was deep sorrow, but also something else. I moved closer, creating an invitation with my body language. “This painting completed my stardust transit. I have arrived, only to leave again in some other form—but not DNA, not lineage.” She paused, as if having an idea that hadn’t quite formed yet. “I have a lineage of dots.” “Indeed, you do.” “I guess it is human to hope you continue in some way—but I am also giving that up now. I . . . I . . . I . . . have loved. Not as big as I wanted, but bigger than I thought was possible. I risked having my heartbroken and went in willingly to that place where some of us never return. I gave myself to love, to be shaped as a human being.” “Don’t spend your days now thinking of what you didn’t do, or how your DNA is completing,” I said. “What matters is that you brought forth something true from your ancestors, and that is enough.” I gave her a half-smile, encouraging her to not dwell in the sorrow I felt as she spoke. “Something true?” she asked, her eyes brightening. For that was all she ever wanted—something true. “What do you think it is?” “I gave the gift that was in me in this lifetime. Which feels like it has been developing for such a long time. As if, somehow, around 7,000 years ago, our people forgot about this gift, and I was able to pick up the trail again and share it. Is that right?” “Yes, that is something true,” I said. “Most humans don’t spend their lives living a gift. Circumstance, heartbreak, family conditions—you name it—keep them from living it out. At times, the carried gift that is the most important thing feels as if it is insignificant when it is the most precious of all things. The most precious of all things never belongs to just one person. If anyone tries to keep it for themselves, the gift just moves onto another destination. Does your gift have a name?” “I call the gift Intentional Creativity. When we make with love, what we make changes and is charged with love. That love returns to us and also moves out into the world.” “You said ‘we’ again. Who is ‘we’?” “All of those who make the dots—we are the lineage of dots. This is way more than just me, but I was somehow available to bring this through. I have dedicated the past twenty-seven years of my life force to bringing it out. Never moving my feet off the path.” “Has it been hard?” “You bet. There were a lot of things I wanted to do: have a family, write books, run art galleries in New York, Paris, San Francisco, develop my own painting—but more than any of those things, I wanted to end violence against women. And that kept me going. I did not accomplish that, however. The violence seems worse than ever.” “Precious soul, you thought at first, you were here to end violence against women inflicted upon them by others,” I said. “And that has been your spark of passion, and even a drive, to tend the thing that you most loved, safety and honor for women. This desire moved you forward on the path, though the goal remained ever elusive. Because, this violence will not be going away for some time as you now, know. Yet your deepest heart call was different than that—we could not tell you all the details, then. We just don’t work that way with full disclosure. 
You just had to accept the assignment and run with it. We tried to help you as you went along. And you actually did fulfill your assignment.” I looked at her directly for the first time. Her eyes were clear and green, seeing what I was saying on some unseen level. I thought I could see her feeling relief. She was letting it register in her whole body that maybe she had done the work she was put here to do. After all. “What, then, have I accomplished?” she asked, looking curious and seeking confirmation. “The sacred assignment you were given was to bring through the gift—that which you call Intentional Creativity. The particular charge of this station that you have been fulfilling was bringing the teaching of ending violence within. If we do not end the violence within, we will be violent to ourselves, or to others, or allow violence to happen to us. The first awakening is to end the violence within. Everything else is either shadowed or illuminated from that perspective.” “I see,” she said, smiling broadly now, without hesitation in her eyes. I watched as it appeared as though a storm was clearing from her face. “How do you really feel—in the innermost chamber of your being?” I asked. “I feel deep, abiding joy. An unreasonable joy that is not connected with any particular thing. Yet knowing that the assignment has been carried out, and can be shared, is a relief.” “There is something else I want to tell you,” I told her as I measured my tone of voice, softening while preparing for what came next. “More is being requested of you—I know you feel really done. But there is a new assignment available for those who have completed stations like yours. It may not be an elevation or a promotion—sorry, it isn’t a hierarchy here. This may require even more than what you have already given.” I watched her back away slowly, but I did not lose her gaze. We were in a moment of transition that many souls don’t even arrive at, let alone get to consider. The work beneath the work. “I don’t know,” she said. “I am tired right now. I want to enjoy my newly found revelations that this is the end of the line, I brought the gift, though it was different than I thought, and there is a lineage of dots. Good ’nuf for now . . . isn’t it?” “You always have a choice. Even when you don’t think you do—that is a choice, too.” “I am dreaming of tropical islands and piña coladas and a bikini big enough even for the likes of me.” She laughed then, and so did I. The laughter between us felt good, like a uniting moment in this story of her stardust transit. “Well. When you are ready, we will be here. Ready for you,” I said, understanding that cosmic requests do take some time to agree to. “Who is ‘we’?” she asked, but I had already disappeared back into where I came from. A conversation with my soul.
1. Landau, Mark, et al. Changes in nerve conduction velocity across the elbow due to experimental error. Muscle and Nerve. 26: 838-840 (2002). One diagnostic criterion for ulnar nerve mononeuropathy at the elbow (UNE) is a decrease in across-elbow nerve conduction velocity (NCV) > 10 m/s compared to the forearm segment. Distance and latency measurement errors are an inherent part of NCV calculations. Twenty electromyographers measured the latencies of stored ulnar compound muscle action potentials and measured the forearm and across-elbow distances along the ulnar nerve. Based on previously published equations, experimental error in NCV was calculated for various NCVs. The mean distances and standard deviations for the forearm and elbow segments were 212.5 ± 2.1 mm and 86.7 ± 4.2 mm, respectively. For an NCV of 55 m/s, a difference of 14 m/s between the two segments can occur from measurement error alone. Distance measurements about the elbow are fraught with interobserver errors, rendering the resultant NCV of that segment of limited value as a sole criterion for the diagnosis of UNE. (A worked error-propagation sketch appears after the next entry.)
The effects of chronic mercury intoxication on urinary markers in workers from northeast Algeria were investigated. Workmen were chosen from highly and moderately mercury-exposed factories, while controls were selected from a non-exposed site. The number of proteinuria cases was higher in the highly exposed subjects, although the nature (glomerular or tubular) of the proteinuria remains unclear. However, it appears difficult to grade the degree of renal disturbance across the exposure levels, since measures such as the amount of excreted protein did not clearly reflect the severity of the kidney lesions. The results also reveal that urine acidity increased progressively with increasing levels of exposure, and a marked inverse relationship between urinary pH and urinary Hg was recorded in the highly exposed workers. Furthermore, the significant differences in blood and urinary mercury concentrations across the three sites reflect the dose-response relationships. 
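To make the Landau result above concrete, the following sketch propagates the reported distance errors, together with a hypothetical latency error, into the apparent difference between forearm and across-elbow velocities. It is a first-order error-propagation sketch under assumed values (the 0.1 ms latency error is an assumption), not a reconstruction of the authors' published equations.

```python
# Illustrative error propagation for segmental nerve conduction velocity (NCV).
# Distances and their standard deviations are the values reported in the Landau
# abstract above; the latency standard deviation is a HYPOTHETICAL value, since
# the abstract does not report one.
import math

def ncv_sd(distance_mm, distance_sd_mm, true_ncv_m_per_s, latency_sd_ms):
    """SD of a segmental NCV (m/s) given distance and latency measurement error."""
    latency_ms = distance_mm / true_ncv_m_per_s        # d [mm] / v [m/s] gives t [ms]
    rel_var = (distance_sd_mm / distance_mm) ** 2 + (latency_sd_ms / latency_ms) ** 2
    return true_ncv_m_per_s * math.sqrt(rel_var)

TRUE_NCV = 55.0    # m/s, the same true velocity assumed in both segments
LATENCY_SD = 0.1   # ms, hypothetical interobserver latency error

sd_forearm = ncv_sd(212.5, 2.1, TRUE_NCV, LATENCY_SD)
sd_elbow = ncv_sd(86.7, 4.2, TRUE_NCV, LATENCY_SD)

# SD of the apparent forearm-minus-elbow velocity difference (independent errors).
sd_diff = math.sqrt(sd_forearm ** 2 + sd_elbow ** 2)
print(f"forearm NCV SD ~ {sd_forearm:.1f} m/s, elbow NCV SD ~ {sd_elbow:.1f} m/s")
print(f"apparent between-segment difference SD ~ {sd_diff:.1f} m/s")
```

With these assumed values the forearm and elbow NCV standard deviations come out near 1.5 and 4.4 m/s, so an apparent between-segment difference of the size quoted in the abstract is plausible at the tails of the error distribution.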
The paper presents a summary of the literature published until December 2000 on the effects of some industrial chemical exposures on color perception, as well as short descriptions of the tests applied. Several different tests have been used to study acquired alterations of color vision. These changes are frequently found in the blue-yellow axis. Many of the tests were originally designed to detect congenital alterations in the red-green axis, and thus have relatively low sensitivity when studying chemically induced deficits in color perception. At present, the Lanthony D15-desaturated panel seems most suitable for applications in industrial settings, since it is clearly the most sensitive and easily administered test. Color vision seems to be a physiological function very sensitive to several chemicals. The potency of industrial chemicals to induce color vision deficiencies has often been investigated during the last two decades. The chemicals most frequently studied are different solvents and mercury. Pronounced effects on color perception have been reported following chronic exposure to organic solvents such as styrene, carbon disulphide, perchloroethylene, n-hexane and solvent mixtures, and to organic as well as inorganic mercury. The effect of occupational toluene exposure seems not as well established, since only slight effects and several negative studies have been reported. For some of these compounds the effect on color vision has been further established through the finding of clear dose-effect relationships. In a few cases, even acute exposure situations, e.g. exposure to toluene for a few hours or acute alcohol intake, seem to affect color perception. Follow-up studies are needed to investigate the possible reversibility of effects in relation to discontinued or reduced exposures. 
4. Iregren A, Andersson M, Nylen P. Color vision and occupational chemical exposures II. Visual functions in non-exposed subjects. NeuroToxicology. 23: 735-745 (2002). This paper presents data on visual functions (visual acuity, contrast sensitivity, and several tests of color vision) in a group of 199 non-exposed healthy subjects evenly distributed across the age range 18-65 years and across sex. Although subjects with obvious congenital color vision deficiencies were removed from the analyses (four males), females were superior to males on several of the color vision tests applied. Age influenced visual acuity and contrast sensitivity, while color discrimination was less affected. Correlations between functions of the right and the left eye in the individual subjects were rather low, ranging from 0.40 to 0.73. Correlations between visual acuity and contrast sensitivity on the one hand and color discrimination on the other hand were still lower (r<0.20). These low correlations between functions in the two eyes support the need for testing each eye separately. 
In a cross-sectional case-control study conducted in northern Italy, 64 former aluminum dust-exposed workers were compared with 32 unexposed controls from other companies matched for age, professional training, economic status, and educational and clinical features. The findings lead the authors to suggest a possible role of the inhalation of aluminum dust in pre-clinical mild cognitive disorder which might be a prelude to Alzheimer’s disease (AD) or AD-like neurological deterioration. The investigation involved a standardized occupational and medical history with particular attention to exposure and symptoms, assessments of neurotoxic metals in serum: aluminum, copper and zinc, and in blood: manganese, lead, and iron. Cognitive functions were assessed by the MMSE, the Clock Drawing Test, and auditory evoked Event-Related Potential (ERP-300). To detect early signs of mild cognitive impairment, the time required to solve the MMSE and CDT was also measured. Significantly higher internal doses of Al in serum, and Fe in blood, were found in the ex-employees compared to the control group. The neuropsychological tests showed a significant difference in the latency of ERP-300, MMSE score, MMSE-time, CDT score and CDT-time between the exposed and the control population. P300 latency was found to correlate positively with serum Al and MMSE-time. Serum Al had significant effects on all tests: a negative relationship was observed between internal Al concentrations, MMSE score and CDT score; a positive relationship was found between internal Al concentrations, MMSE-time and CDT-time. All the potential confounders such as age, height, weight, blood pressure, schooling years, alcohol, coffee consumption and smoking habit were taken into account. These findings suggest a role of Al in early neurotoxic effects that can be detected at a pre-clinical stage by P300, MMSE, MMSE-time, CDT-time, and CDT score, considering a 10 µg/L cut-off level of serum Al, in Al foundry workers with concomitant high blood levels of Fe. 
The authors raise the question whether pre-clinical detection of Al neurotoxicity and consequent early treatment might help to prevent or retard the onset of AD or AD-like pathologies. 
Evidence discussed in this review article lends strong support to an etiologic role of environmental factors in Parkinson’s disease. First, thanks to the discovery of MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine), it is now clear that, by targeting the nigrostriatal system, neurotoxicants can reproduce the neurochemical and pathological features of idiopathic parkinsonism. The sequence of toxic events triggered by MPTP has also provided us with intriguing clues concerning mechanisms of toxicant selectivity and nigrostriatal vulnerability. Relevant examples are i) the role of the plasma membrane dopamine transporter in facilitating the access of potentially toxic species into dopaminergic neurons; ii) the vulnerability of the nigrostriatal system to failure of mitochondrial energy metabolism; and iii) the contribution of inflammatory processes to tissue lesioning. Epidemiological and experimental data suggest the potential involvement of specific agents as neurotoxicants (e.g. pesticides) or neuroprotective compounds (e.g. tobacco products) in the pathogenesis of nigrostriatal degeneration, further supporting a relationship between the environment and Parkinson’s disease. A likely scenario that emerges from our current knowledge is that neurodegeneration results from multiple events and interactive mechanisms. These may include i) the synergistic action of endogenous and exogenous toxins (e.g. the ability of the pesticide diethyldithiocarbamate to promote the toxicity of other compounds); ii) the interactions of toxic agents with endogenous elements (e.g. the protein α-synuclein); iii) the tissue response to an initial toxic insult; and, last but not least, iv) the effects of environmental factors on the background of genetic predisposition and aging. 
A case is described of an experimental physicist who developed parkinsonism, apparently as a delayed toxic effect of prolonged exposure to vapors of methanol in the laboratory. Clinical findings and magnetic resonance imaging (MRI) supported the diagnosis, after exclusion of hereditary diseases and primary degenerative diseases. Screening for heavy metals in urine and plasma ceruloplasmin was negative. This case illustrates the delayed neurotoxic effect of long-term exposure to methanol with no episodes of acute intoxication. The setting of a research laboratory with prolonged exposure to mixed single crystals and inhalation of methanol vapors may exist in other academic and hi-tech environments, and pose the risk of similar delayed toxic influences. 
The cellular and molecular site and mode of action of acrylamide (ACR) leading to neurotoxicity have been investigated for four decades, without resolution. Although compromise of fast axonal transport has been the central theme for several hypotheses, the results of many studies appear contradictory. Our analysis of the literature suggests that differing experimental designs and parameters of measurement are responsible for these discrepancies. Further investigation has demonstrated consistent inhibition of the quantity of bi-directional fast transport following single ACR exposures. Repeated compromise in fast anterograde transport occurs with each exposure. 
Modifications of neurofilaments, microtubules, energy-generating metabolic enzymes and motor proteins are evaluated as potential sites of action causing the changes in fast transport. Data supporting and contradicting the hypothesis that deficient delivery of fast-transported proteins to the axon causes, or contributes to, neurotoxicity are critically summarized. A hypothesis of ACR action is presented as a framework for future investigations. 
9. Merlevede K, Theys P, and Van Hees J. Diagnosis of ulnar neuropathy: a new approach. Muscle and Nerve. 23: 478-481 (2000). Conventional electrodiagnosis used to detect an ulnar neuropathy at the elbow depends on accurate determination of ulnar nerve length across this segment. We present a new approach, using the difference in latency of the compound nerve action potentials (CNAPs) of the ulnar and median nerves elicited by stimulation at the wrist and recorded 10 cm above the elbow. Sixty normal controls were examined in order to determine the normal upper limit (1.4 ms) of the difference in CNAP latency of the ulnar and the median nerves (Dlat index). Values obtained in 10 patients with ulnar nerve lesions are discussed. This test was shown to be both sensitive and specific, was independent of ulnar nerve length, and was easy to perform. (A minimal worked example appears after the following entry.)
10. Swash M. What does the neurologist expect from clinical neurophysiology? Muscle and Nerve Supplement. 11: S134-S138 (2002). The future role of clinical neurophysiology is considered in the light of its achievements. It is argued that there is a need to develop methods for specific diagnosis, especially in neuropathies. There is also an unmet requirement for the development of techniques for the prediction of treatment outcomes and for the measurement of changes during the natural history of neuromuscular disorders and their treatment. These issues are not addressed by currently available clinical test methods. 
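The Dlat index described in the Merlevede entry above reduces to a single subtraction against a fixed cut-off, which is what makes it independent of ulnar nerve length. The sketch below uses hypothetical latencies; only the 1.4 ms upper limit of normal comes from the abstract.

```python
# Minimal sketch of the Dlat index from the Merlevede abstract above: the
# difference in compound nerve action potential (CNAP) onset latency between
# the ulnar and median nerves, stimulated at the wrist and recorded 10 cm
# above the elbow. The latencies below are hypothetical.
DLAT_UPPER_LIMIT_MS = 1.4   # upper limit of normal reported in the abstract

def dlat_index(ulnar_latency_ms, median_latency_ms):
    """Latency difference in ms; note that no nerve-length measurement is needed."""
    return ulnar_latency_ms - median_latency_ms

dlat = dlat_index(ulnar_latency_ms=6.9, median_latency_ms=5.2)  # hypothetical values
print(f"Dlat = {dlat:.1f} ms -> {'abnormal' if dlat > DLAT_UPPER_LIMIT_MS else 'within normal limits'}")
```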
11. Carter N, et al. EUROQUEST – A questionnaire for solvent-related symptoms: Factor structure, item analysis and predictive validity. NeuroToxicology. 23: 711-717 (2002). The study evaluates the factor structure and predictive validity of the symptom questionnaire EUROQUEST (EQ), which had been developed with the goal of simplifying the evaluation of health effects associated with long-term solvent exposure. The EQ was added to the normal evaluation procedures for 118 male patients with suspected solvent-induced toxic encephalopathy (TE) referred to seven Swedish clinics of occupational medicine during an 18-month period. EQ was also completed by 239 males from a random sample of 400 Swedish males aged 25-64 years selected from the general population, and a sample of 559 occupationally active male spray painters aged 25-64 years. Factor and item analyses of EQ responses were performed. Ordinary least squares regression analysis was used to evaluate sensitivity, and correlation to evaluate the specificity, of EQ and its separate components. Questions concerning memory and concentration symptoms alone showed better sensitivity than the other five EQ dimensions singly or combined for the entire EQ and for a subset of questions approximating Q16, a widely used organic solvent symptom screening questionnaire. However, the diagnosis of TE required information in addition to exposure and responses to EQ and Q16-like questions. The results indicate that the subset of EQ questions concerning memory and concentration might replace the more cumbersome EQ and less sensitive Q16 in screening for TE, although none of the screening instruments alone replaces current clinical diagnostic procedures. 
John Engstrom, MD, claims this book to be an invaluable reference for the clinician or physician in training. A user-friendly, well-organized, comprehensive resource, the book is divided into three sections. The first 35 chapters present the signs and symptoms of neurologic disease. The next 18 chapters discuss the use of laboratory investigations and the utility of subspecialty consults to neurology. The last section discusses specific neurologic diseases. An associated website provides text references, updates, links to related sites, and a discussion forum. Engstrom concludes that the text is an essential part of every medical library. Muscle and Nerve. December 2001, pp. 1897-98. 
Experimental and Clinical Neurotoxicology (2nd ed). Edited by P.S. Spencer. New York: Oxford University Press, 2000. Dr. Michael Aminoff claims this text is an invaluable resource for those interested in neurotoxicology. Introduced by a discussion of neurobiological principles in relation to medicine, the book comprises an alphabetical listing of neurotoxic chemicals, a summary of their biological/clinical effects, and a rating of the strength of association between them. Two appendices are provided. The first is an alphabetical list of each compound and its neurotoxicological rating. The second lists specific neurological disorders and their associated neurotoxins. 
Neurologic Catastrophes in the Emergency Department, by EFM Wijdicks. Boston: Butterworth-Heinemann, 1999. As reviewed by Dr. J. Hemphill, the book is a valuable resource for the assessment, triage, and acute management of critically ill neurologic patients. Wijdicks, the director of the Neurological/Neurosurgical Intensive Care Unit at St. Mary’s Hospital and Mayo Medical Center, provides an informative, user-friendly reference for physicians ranging from emergency to neurosurgical specialties. The first section describes evolving catastrophes in the neuraxis. The second section outlines catastrophic neurological disorders due to specific causes. The book is well organized with highlights of important points and is claimed to be an excellent text for physicians of many specialties. 
13. Landrigan, Philip. A Year of Passage. American Journal of Industrial Medicine 38: 483-484 (2000). A letter from the editor pays homage to three leading figures in the field of occupational/industrial medicine who passed on in 1999. David Platt, director of the NIEHS from 1971-1990, supported the training of scientists, conducted important toxicological studies, and founded a network of university centers in environmental health. He advanced science and the careers of young scientists both nationally and abroad. John Goldsmith was a renowned environmental epidemiologist with expertise in air pollution, biostatistics, and respiratory disease. He served as president of the ISEE, as a consultant to the WHO and the NIEHS, and on the faculty of medicine at Ben-Gurion University of the Negev in Israel. Janusz A. Indulski was a leading figure in occupational medicine in Poland, having founded the Lodz Medical Academy, the School of Public Health at Lodz, and the Polish Society of Social Medicine. 
He founded the Polish Journal of Occupational Medicine, and acted as director of the Polish Institute of Occupational Medicine and as a consultant to the WHO and to governments of multiple countries on topics of occupational and environmental health. Successors to these three leaders carry on their legacy; several now serve on the board of the American Journal of Industrial Medicine: Lynn Goldman, Gregory Wagner, David Goldsmith, Maureen Hatch, Anne Golden, Franklin Mirer, and Sylvia Johnson. 
1. Xiao J, and Levin S. The diagnosis and management of solvent-related disorders. American Journal of Industrial Medicine. 37: 44-61 (2000). Xiao and Levin provide a valuable review of occupational/industrial exposure to organic solvents, discussing both the acute and chronic effects upon health. This includes the cardiovascular (CV), central nervous (CNS) and peripheral nervous (PNS) systems, renal and liver function, skin, mucous membranes, and reproductive organs and development. Specific attention is given to benzene, toluene, xylene, styrene, n-hexane, trichloroethylene, tetrachloroethylene, trichloroethane, carbon disulfide, methyl ethyl ketone, and methyl-n-butyl ketone. With support from the current literature, chemical structure, industrial use, metabolism, symptoms of exposure, pathological changes, acute and chronic health effects, treatment and management are presented. Suggestions for biological/environmental monitoring and worker protection are provided. Emphasis is placed upon intervention by industrial hygienists and worker/employee education in exposure reduction techniques. 
2. Herbert R, Gerr F, and Dropkin J. Clinical evaluation and management of work-related carpal tunnel syndrome. American Journal of Industrial Medicine. 37: 62-74 (2000). This well-documented article presents the clinician with a cohesive review of the etiology, diagnosis and treatment of work-related carpal tunnel syndrome (WRCTS), one of the most costly and disabling occupational musculoskeletal disorders. Key aspects of the medical history, physical exam, and laboratory evaluation are outlined. Patients may present with pain, numbness and/or tingling in the distribution of the median nerve, and weakness of the abductor pollicis brevis (APB) in advanced cases. Electrodiagnostic studies are the gold standard test for CTS. Prolonged distal motor or sensory latencies of the median nerve, slowing of the median sensory conduction velocity, and denervation of the APB are suggestive of CTS. Risk factors include forceful, repetitive, or vibratory movements of the hands, as well as systemic illness such as collagen vascular disease or renal failure. The literature supports a positive surgical outcome for severe CTS cases, while mild to moderate cases may be more effectively treated with conservative methods. WRCTS cases show poorer outcomes for recovery in comparison to non-occupational cases. Emphasis is placed upon early workplace intervention and application of ergonomic modifications. Such intervention is key to healing and faster return-to-work rates. Engineering and administrative controls must be instituted to reduce worker exposure to inciting movements. The high cost and burden of CTS in the workplace validate the need for future research and intervention in this area. 
The Massachusetts Department of Public Health (MDPH) instituted a study to identify occupations and industries with a high incidence of work-related carpal tunnel syndrome (WRCTS). Data from 1992-97 were drawn from the surveillance study and were evaluated for frequency and distribution of cases. 
In addition to targeting worksites with a high incidence of WRCTS, the study sought to evaluate the effectiveness of the MDPH surveillance system in targeting specific worksites for intervention. Strengths of the study include the identification of cases through both workers’ compensation records (WC) and physician reports (PR) to the MDPH, rather than reliance upon a single source. Cases identified from PR and WC were compared with respect to demographics and employment characteristics. Overall, 4800 new cases were identified, at a rate of 4 cases per 10,000 workers. A high proportion of cases were under 25 years old and female. The largest number and rate of WRCTS cases were identified in the manufacturing sector. High rates were also seen in technical/administrative operations using video display units. The combined use of PR and WC reports identified far more cases in MA than had been indicated by either system alone. The study is limited by underreporting by both workers and physicians. High numbers of reported cases for a given worksite reflect either a high prevalence of risk factors or simply better reporting rates. While physician reports help identify establishments where intervention is needed, WC reports are more reliable for targeting specific occupations and industries. Future research is needed to identify factors influencing reporting rates. Improvements in reporting will allow for more accurate assessment of the costs incurred by WRCTS upon society. 
4. Spinner R, and Kline D. Surgery for peripheral nerve and brachial plexus injuries or other nerve lesions. Muscle and Nerve. 23: 680-695 (2000). This neurosurgical review from Louisiana State University Medical Center provides the clinician with a clearly outlined approach to the patient with nerve injury. In a succinct yet comprehensive manner, the authors describe key aspects of the clinical evaluation and appropriate electrodiagnostic/radiographic studies, including EMG, NCS, x-ray, CT myelogram, and MRI, for various cases. Valuable diagnostic questions are provided. A discussion of surgical versus conservative treatment includes timing factors, contraindications to surgery, and details of operative procedures. Early surgery is advised for open lacerations involving clean nerve transections. A delay is suggested for bluntly transected nerves, or lesions-in-continuity. Advanced microsurgical techniques and intraoperative electrophysiologic measures such as nerve action potentials (NAP) have led to more accurate diagnoses and positive surgical outcomes. The operative procedures discussed include internal and external neurolysis, split repair, end-to-end epineurial repair, graft repair and nerve transfers. An algorithm for surgical management of nerve injury, tables, diagrams, and photos of surgical procedures are excellent supplements to the text. Post-operative management and results are outlined, with specific attention to nerve entrapments, peripheral nerve tumors, injection injuries, obstetric palsies and pain syndromes. Optimisation of surgical outcomes depends upon accurate evaluation, with identification of the pattern of deficit, type of lesion, and severity. Improved imaging (MR neurography) may reduce the need for future exploratory surgery. Research into the neurobiology of nerve injury and regeneration will foster treatments with less scarring and more rapid, effective healing. Neurotrophic factors and fibroblast products will play important roles in regenerative techniques. 
Outcome studies of current nerve and muscle transfer procedures are recommended. 
This is a case report of a 59-year-old female custodial worker presenting to an occupational health clinic with longstanding multiple chemical sensitivities (MCS), a left visual field deficit and papilledema. Her symptoms manifested upon exposure to low-level chemical odors, particularly cleaning products, and included headache, visual and auditory problems, dermatitis, musculoskeletal pain, fatigue and weakness, all progressing over a 12-year period. MRI revealed a 6 cm right occipital lobe tumor. Following surgical resection of the meningioma her visual deficit and multiple symptoms continued to manifest upon exposure to a variety of normally harmless odors. The case closely fits the working definition of MCS proposed by Cullen: gradually progressive multisystem symptoms, particularly neurological and psychiatric, but often respiratory or GI, in response to a low-dose environmental exposure, with no underlying cause identifiable upon evaluation. The authors present plausible explanations for the patient’s symptoms. Mass lesions and increased ICP may cause headache, global neurological deficits, and mental and psychiatric problems. The concepts of kindling and time-dependent sensitisation are proposed mechanisms for the patient’s increased sensitivity to cleaning compounds over time, potentially triggering a physiologically conditioned response. The case is a reminder that patients with MCS must be closely evaluated for treatable intracranial lesions affecting olfaction. This patient’s multisystem symptoms persisted following removal of the tumor, suggesting a conditioned response or psychological etiology was at hand. 
6. Osterberg K, et al. A comparison of neuropsychological tests for the assessment of chronic toxic encephalopathy. American Journal of Industrial Medicine. 38: 666-680 (2000). The aim of this study was to evaluate whether the current test battery for detection of solvent-induced toxic encephalopathy (TE) can be improved by utilization of new tests for complex attention and frontal lobe function. Assessment of the sensitivity and specificity of the test battery was another goal. From 1995-96, this Swedish study investigated two groups of men previously diagnosed with TE (5 years of daily occupational solvent exposure with subsequent cognitive and emotional symptoms) and a reference group. All TE subjects were classified as 2A or 2B according to prior test outcomes. All had irreversible mixed symptoms, while only the 2B group suffered cognitive reduction. The three groups were examined with both traditional TE tests and the newer tests for complex attention and frontal lobe function. Results showed most new tests to be less sensitive to TE than many traditional tests. Compilation of an optimised test battery including both traditional and newer tests resulted in a sensitivity and specificity of 0.7 (the sketch below illustrates how such figures are computed). The authors discuss the sensitivity and specificity of each traditional test, reproducibility of test profiles and strategies for clinical differentiation between 2A and 2B subjects. Because of the variability in functional domains affected in TE, variation in cognitive performance across tests is to be expected. Clinicians are reminded that no test profile is diagnostic of TE and, therefore, clinical evaluation with the optimised test battery, combining various traditional and newer tests, is recommended. 
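The sensitivity and specificity of 0.7 quoted for the optimised battery in the Osterberg entry above are simple proportions from a two-by-two classification table. The sketch below shows the computation; the counts are hypothetical, chosen only to reproduce the 0.7 figures, and only the definitions are standard.

```python
# Minimal sketch of the sensitivity/specificity figures quoted for the
# optimised test battery in the Osterberg abstract above.
def sensitivity(true_pos, false_neg):
    """Proportion of TE cases the battery flags as abnormal."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of referents the battery correctly leaves unflagged."""
    return true_neg / (true_neg + false_pos)

# Hypothetical 2 x 2 table: 28 of 40 TE cases flagged, 21 of 30 referents cleared.
print(round(sensitivity(true_pos=28, false_neg=12), 2))   # 0.7
print(round(specificity(true_neg=21, false_pos=9), 2))    # 0.7
```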
7. Fairfield K, and Fletcher R. Vitamins for chronic disease prevention in adults. JAMA. 287: 3116-3126 (2002). Using data obtained from a Medline search, the authors conducted a review of nine clinically important vitamins. Dietary sources, biological activity, deficiency syndromes, relation to chronic disease, and toxic effects are discussed. While vitamin deficiency is not prevalent in North America, sub-optimal intake is, and has been linked to elevated risk for chronic diseases such as cancer, CHD, and osteoporosis. Certain groups are at increased risk for insufficient vitamin intake or absorption, including the elderly, vegetarians, pregnant women, hospitalised patients, and those affected by alcoholism or malabsorption. While supplementation is useful, care must be taken to avoid an overdose of fat-soluble vitamins. Sub-optimal levels of vitamins B6, B12 and folate may lead to elevated homocysteine levels, and therefore increased risk for CHD. Both B6 and B12 are required for nervous system function. Low folate is also associated with aberrant DNA synthesis and neural tube defects. Vitamin E and lycopene supplementation may decrease risk of prostate cancer. Vitamin D, in association with calcium, has been shown to decrease fracture incidence. It is the responsibility of the physician to inquire about a patient’s knowledge surrounding vitamins and use of supplements, and to counsel patients about dietary sources of vitamins. Scientific evidence for the benefits of vitamin supplementation is not well established and much information has been derived from observational studies. 
The authors present the clinician with a discussion of vitamin supplementation in the general population. Sub-optimal levels of vitamin intake are very prevalent in the United States, and present risk factors for chronic disease. Supplementation has been shown to reduce the rate of clinical events. Folate supplementation during pregnancy is associated with decreased neural tube defects, and vitamin D with calcium supplementation has been shown to reduce fracture incidence. The authors present three options for correction of sub-optimal vitamin intake. First, physicians are advised to counsel patients about dietary improvements, as food provides the most bioavailable source of vitamins. Second is the addition of vitamins to commercial foods. Third is vitamin supplementation. All adults are recommended to take a single multivitamin per day. Supplemental folate, B6, B12, and vitamin D help prevent cardiovascular disease, cancer and osteoporosis. Additional B12 and D are recommended for the elderly, and folate for women who may become pregnant. Over-supplementation of vitamin A and iron is possible in certain populations. Readily available multivitamins are inexpensive and generally equivalent across brands. The authors discourage commercial testing for vitamin deficiencies in the general population, but rather encourage physicians to ask patients about vitamin use and to be aware of a patient’s potential fear that physicians may disapprove of supplementation. Websites providing links to evidence-based information regarding vitamin supplementation are cited. 
This article presents a synopsis of the information presented during the 2001 conference in Colorado, sponsored by the University of Arkansas for Medical Sciences and the NIEHS. The multifaceted aspects of Parkinson’s disease (PD) are outlined. Progression of tremor and impaired gait, balance, co-ordination, and proprioception occur most often after age 55. 
Symptoms manifest when deterioration of the substantia nigral cells has led to 80% depletion of dopamine, the neurotransmitter responsible for smooth motor control by the motor cortex. Risk factors for PD include viral infections, high levels of dietary fat, depression, head trauma, elevated iron levels, and exposure to hydrocarbon solvents, Mn, Pb, Cu, Fe, and pesticides such as paraquat, heptachlor or maneb. The existence of Lewy bodies has not yet been determined to be a cause, or effect, of PD. Future research is directed at identification of specific gene mutations common to the PD phenotype and defining the role of the Lewy body. There is a call for the combined expertise of epidemiologists, geneticists, and those working in the basic sciences. Future funding should be directed towards identifying pathological symptoms of PD, gene profiling, and outlining the cellular mechanisms of neurodegeneration. The late onset of PD and lack of PD registries, diagnostic tests, or biomarkers pose difficulties to epidemiological studies. A promising nested case-control study is currently examining aspects of PD among farmers potentially exposed to pesticides, metals and Nocardia. Twenty-four PD-related projects are currently supported by the NIEHS. A consortium has been established to study the gene-environment interaction in PD, and is intended to facilitate communication between clinicians, geneticists, epidemiologists, and researchers, and support translation of lab discoveries to the clinic. 1. Bast-Patterson R, Skaug V, Ellingsen D, and Thomassen Y. Neurobehavioral performance in aluminum welders. American Journal of Industrial Medicine. 37 : 184-192 (2000). This cross-sectional study of 20 aluminum workers (mean exposure 8.1 years) and 20 controls examined possible neuromotor/behavioral effects of aluminum exposure. Prior studies have shown impaired motor function, neuropsychiatric symptoms, and delayed reaction times. Current exposure was assessed by sampling of air within respiratory protection devices and urinary Al levels. Testing included screening for neurological symptoms, memory and attention, hand steadiness and reaction times. Welders reported significantly more symptoms than controls. No clinically significant tremor was apparent in either group. Welders performed significantly better than controls in tests for hand steadiness and reaction times. However, there was a significant association between duration of exposure and impaired performance in tremor tests, as well as longer reaction times and Al in air. Results indicate welders are not clinically impaired in steadiness or reaction time. The high performance of welders may be attributable to a job-related training effect or selection of steady-handed workers to the welding profession. 2. Lo B, et al. Discussing religious and spiritual issues at the end of life. JAMA 287 (6) :749-754 (2002). A patient’s spiritual issues and religious concerns may become heightened near the end of life and affect their medical decisions about life-sustaining interventions. Communication of these concerns to the physician is of great importance to relieving patient distress and improving understanding between the patient and physician. This article presents physicians with valuable guidelines for addressing, and responding to, a patient or family members about such concerns. Three case scenarios are analyzed. 
Emphasis is placed on respectfully helping patients think through medical decisions, empathetic listening, posing of open-ended questions, and providing a supportive environment in which patients may find comfort and closure near the end of life. Previous studies have indicated that exposure to certain metals, herbicides and pesticides may lead to neurochemical changes and neuronal death in the substantia nigra, leading to development of Parkinson’s Disease. This population-based case-control study examined the relationship between PD and a range of occupations and industries. Subjects included 144 cases and 464 controls receiving primary care within the Henry Ford Health System between 1991-1994. Occupational histories were obtained by structured interviews and classified into 1 of 9 broad categories. Strengths of the study included the detailed method of job history classification for both cases and controls, and the use of well-defined coding systems. Results indicated an increased risk of PD with work experience in agriculture, fishing and forestry occupations, while experience in a service occupation was negatively associated with PD. This is the first case-control study to identify an inverse relationship between ever-working in the service industry and PD. Future studies should focus on validation of this protective effect among various populations, as well as the elevated risk of PD associated with agricultural industries. Attention to potential environmental exposures and common lifestyle factors within each occupational category is warranted. 4. Manzo L, et al. Assessing effects of neurotoxic pollutants by biochemical markers. Environmental Research 85 : 31-36 (2001). This article reviews the efficacy and limitations of current neurotoxicity biomarkers. Toxic biochemical changes often occur at the cellular/subcellular level before overt nervous system disease manifests. These changes may serve as markers of neurotoxicity in exposed populations. Complexity of nervous system functions, the multistage nature of neurotoxic events, and inaccessibility of target tissues pose obstacles to the use of biomarkers. However, new techniques have been established to assess exposure, effect and susceptibility to neurotoxic disease. Surrogate markers for nervous tissue are found in peripheral tissues. Presently, ALAD, AchE, and MAO are used as biomarkers of exposure to lead, organophosphates, and Mn/styrene respectively. The use of peripheral biomarkers is limited to agents whose mechanism of action is known, and is based upon the assumption that peripheral tissues have the same pharmacological properties as nervous system targets, and therefore are affected by neurotoxins in a similar way. This must be validated by mechanistic studies. Differences in dose-response and adaptation may occur. Markers detecting early neurotoxic changes may be effective as tools for identifying subclinical disease states and initial neurotoxic effects. Their development could play a critical role in occupational and preventative medicine. In combination with neurobehavioral and physiological assays, biochemical markers may lead to more precise human neurotoxicity assessment, especially in chronic, low dose exposures. 5. Apostoli P, Lucchini R, Alessio L. Are current biomarkers suitable for the assessment of manganese exposure in individual workers ? American Jo of Industrial Medicine 37 : 283-290 (2000). 
The rising industrial and agricultural use of Mn has given rise to heightened interest in the chronic effects of low-dose exposure. To assess health effects, a precise means of quantifying Mn exposure is required. Currently, estimates of external and internal dose are based upon airborne [Mn] and whole blood/urine [Mn], respectively. This study examined the relationship between airborne Mn levels and Mn in blood and urine in a group of Italian ferroalloy workers. Biological and environmental monitoring of Mn was conducted among 94 exposed workers and 87 controls, and a cumulative exposure index (CEI) was calculated. Generally, blood Mn for subjects was twice that of controls, and urine Mn was five times as high. Blood Mn reflected total body burden and was linearly related to Mn in air. No association between the CEI and Mn levels was observed. Blood and urinary Mn may effectively indicate exposure on a group basis, but due to high variability, they are not suitable as biomarkers of individual exposure. Further research with attention to precise analytical techniques is required to determine more accurate biomarkers for Mn exposure. 
6. Dietz M, et al. Results of magnetic resonance imaging in long-term manganese dioxide-exposed workers. Environmental Research 85: 37-40 (2001). A cross-sectional study of 11 battery factory workers (mean 10 years of exposure) and 11 controls was conducted to assess the relationship between Mn exposure and pallidal signal intensity on MRI, neurophysiological and motor performance, and biological levels of Mn. Exposure was assessed via ambient air monitoring and biomonitoring of hair, blood and urine. A cumulative exposure index (CEI) was calculated. Subjects underwent standardized interviews and clinical exams with neurophysiological testing. MRI evaluation allowed for calculation of the Pallidal Index (PI), the ratio of signal intensity in the globus pallidus to subcortical frontal white matter multiplied by 100. Exposed subjects had higher mean blood Mn, with no significant signs of Parkinsonism or neurophysiological deficits. However, significantly increased signal intensity in the globus pallidus was observed, and is proposed to be due to paramagnetic properties of Mn. There was a positive correlation between CEI and blood Mn, as well as CEI and PI, for males only. Controlling for gender differences, future studies must focus on determining normal values for PI, and the health outcomes of elevated PI levels including Parkinsonism and neurotoxic effects. PI in T1-weighted MRI studies was found to be an effective marker of Mn exposure. (A minimal sketch of the PI computation appears after the next entry.)
7. Kishi R, et al. Effects of low-level occupational exposure to styrene on color vision: dose relation with a urinary metabolite. Environmental Research. 85: 25-30 (2001). This study examined the threshold effects of chronic styrene exposure on color vision. Mandelic acid in urine served as the biomarker of exposure for 105 styrene workers (mean exposure 6.2 years) and 117 controls. Work history, solvent exposure, alcohol and drug use data were obtained via standardized questionnaires. Color vision was assessed by Lanthony desaturated panel D-15 testing and a color confusion index (CCI) was calculated. A significant difference was observed in the CCI between both medium and high exposure groups and their age-matched controls. A dose-effect relationship was apparent for urinary mandelic acid and impaired color vision, particularly in the blue-yellow range. The threshold value for increased risk of color vision loss appears to be 0.1-0.2 g/L. Color vision testing is recommended before and after occupational styrene exposure to facilitate detection of early toxic effects. 
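The Pallidal Index used in the Dietz entry above is defined directly in the abstract as a ratio of signal intensities multiplied by 100. The sketch below simply implements that definition; the ROI signal values are hypothetical, and how the ROI means are extracted from the T1-weighted image is not specified in the abstract.

```python
# Sketch of the Pallidal Index (PI) as defined in the Dietz abstract above:
# the ratio of T1 signal intensity in the globus pallidus to that in
# subcortical frontal white matter, multiplied by 100. ROI means are
# hypothetical values.
def pallidal_index(globus_pallidus_signal, frontal_white_matter_signal):
    return 100.0 * globus_pallidus_signal / frontal_white_matter_signal

print(round(pallidal_index(globus_pallidus_signal=512.0, frontal_white_matter_signal=470.0), 1))  # ~108.9
```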
8. Noonan C, et al. Occupational exposure to magnetic fields in case-referent studies of neurodegenerative diseases. Scandinavian Journal of Work and Environmental Health 28 (1): 42-48. This study examined the relationship between occupational magnetic field exposure and Alzheimer’s disease, amyotrophic lateral sclerosis, and Parkinson’s disease. Case-referent sets were created from death certificates for males in Colorado between 1987 and 1996. These were analyzed according to three methods of exposure assessment: electrical vs non-electrical occupations; a tiered grouping of potential magnetic field exposure based upon job title and industry; and estimates of mean magnetic field exposure values from a job-exposure matrix. While there were no consistent observations of elevated risk for either Alzheimer’s disease or ALS, PD did show a positive association with magnetic field exposure under all three assessment methods. The study is limited by variations in definitions of high-exposure job titles between the three exposure assessment methods, and the inherent inaccuracy of data obtained from death certificates. Furthermore, no duration of exposure was documented. The authors conclude that findings for an association between magnetic field exposure and neurodegenerative disease are sensitive to the method of exposure assessment used. The positive association between magnetic field exposure and PD requires verification in future studies. 
9. Goodman M, et al. Neurobehavioral testing in workers occupationally exposed to lead: a systematic review and meta-analysis of publications. Occupational and Environmental Medicine. 59: 217-223. The blood concentration at which lead toxicity manifests has not yet been determined. This meta-analysis of occupational studies evaluated the association between results of neurobehavioral testing and moderate blood lead levels. A Medline search produced 22 studies meeting inclusion criteria, including blood Pb concentrations less than 70 µg/dl. Results indicated that the available data are inconsistent and provide inconclusive information on the neurobehavioral effects of moderate Pb exposure. Studies provided inadequate cumulative exposure/absorption data, and poor adjustments for age and pre-exposure intellectual function. A lack of uniform testing methods also existed. Meta-analysis results are very sensitive to changes in inclusion criteria and statistical methods. Prospective studies are required to compare neurobehavioral function before and after lead exposure. In determining occupational health regulations, the quality of scientific data for individual studies is more valuable than a pooled analysis of many studies. 
10. Araki S, Sato H, Yokoyama K, Murata K. Subclinical neurophysiological effects of lead: A review of peripheral, central, and autonomic nervous system effects in lead workers. American Journal of Industrial Medicine. 37: 193-204 (2000). This article presents an overview of 102 articles examining the subclinical effects of occupational lead exposure. Assessment of PNS effects focused on nerve conduction velocity (NCV) and the distribution of NCVs (DCV). Somatosensory, visual, and brainstem auditory evoked potentials served as measures of function along pathways from the periphery to CNS targets. Event-related potentials (P300) reflected cognitive function. ECG R-R interval (CVRR) variability was a useful assessment of ANS function. 
Vestibulo-cerebellar system effects were evaluated by posturography. Results indicate slowing of NCVs at lead levels as low as 30-40 µg/dL, and possible effects on the P300 latency, postural balance, and CVRR at similar levels. At 40-50 µg/dL, effects on DCV, somatosensory, visual, and brainstem auditory evoked potentials may occur. Children are more susceptible to lead toxicity than adults, and may develop deficits at lower doses. Cognitive function is particularly susceptible to lead toxicity at low exposure levels. Subclinical effects of lead on the nervous system have shown reversibility (chelation with CaEDTA). Zinc and copper may antagonize lead toxicity. Results show lead simultaneously affects peripheral, central, and autonomic nervous systems. Follow-up studies are required to determine the precise relationships between blood lead levels and neurophysiologic effects. 11. Robinson L. Traumatic injury to peripheral nerves. Muscle and Nerve. 23 : 863-873 (2000). Robinson comprehensively outlines the epidemiology and classification schemes for peripheral nerve injury, including the effects of each type of injury on associated muscles and nerves. The appropriate use and optimal timing of electrodiagnostic studies are discussed. Using ECG, CNAP, F and H waves, and SNAP measures, the degree of injury severity, prognosis and surgical necessity may be determined. Such measures are also useful in localization of pathology in cases of mixed injuries. Electrodiagnostic changes specific to each type of nerve injury are presented. Mechanisms of recovery are discussed in relation to determinants of prognosis, treatment, and surgical intervention. 12. Storzbach D, et al. Neurobehavioral deficits in Persian Gulf veterans : Additional evidence from a population-based study. Environmental Research. 85 : 1-13 (2001). The Portland Environmental Hazards Research Center undertook a population-based study to examine unexplained Gulf War symptoms such as problems with memory and attention, fatigue, and muscle pain. Cases (239) were drawn from veterans reporting symptoms and compared with 112 non-symptomatic GW veteran controls. A prior interim study showed a significant difference between cases and controls and identified a distinct subgroup of very slow responders in the Oregon Dual Task Procedures (ODTP) test. This study examined a larger sample of veterans to determine if the bimodal distribution identified in the interim study continued to exist, and to determine if “slow ODTP” veterans constitute a subgroup distinct from other GW vets with unexplained symptoms. Subjects completed mail-in surveys, clinical exams, and a battery of psychological and neurobehavioral tests. The slow ODTP group performed significantly worse in neurobehavioral testing than other cases and controls. Deficits in memory, attention, and response speed were identified. These results confirm that a bimodal distribution of cases exists and provided significant evidence that the majority of cases reporting behavioral symptoms had no objective evidence of neurobehavioral deficits. However, the larger group did show significantly elevated levels of psychological distress. The majority of significant differences between cases and controls in neurobehavioral performance are due to the presence of a small ODTP subgroup. This may represent a group at the lower end of health status prior to the GW experience. 
Further research should focus on this group, rather than all symptomatic veterans, to identify why neurobehavioral differences are present between cases and controls. This group provides the opportunity to discover the etiology of cognitive complaints, possibly involving neurotoxic exposures during the war. Studies should include brain imaging, EEG, and comprehensive neuropsychological exam. 1. Zhou W, Liang Y, Christiani D. Utility of the WHO Neurobehavioral Core Test Battery in Chinese Workers-A Meta-Analysis. Environmental Research. Section A 88: 94-102,2002. A meta-analysis assessing the efficacy of the WHO NCTB, summarizing measures of the tests, and aiming to determine the most sensitive subtests for specific exposure agents. Widespread use of NCTB in China made this an appropriate resource base. Following database searches and screening, 39 cross-sectional studies met inclusion criteria. Summary effect measure was estimated using the fixed-effect model. All subtests were found to be sensitive for detection of neurotoxic effects of occupational/environmental agents, yet specific subtests showed higher sensitivity for neurobehavioral changes induced by a given exposure agent. For Hg exposure, the Benton Visual Retention Test was most sensitive; for Pb exposure, Pursuit Aiming II and Profile Mood States showed highest sensitivity, and for organic solvent exposure, Digit Span, Pursuit Aiming II, and Digit Symbol were most sensitive. These subtests may not represent primary neurotoxic targets. Further study is required to clarify the relation between exposure and subtest results. Results may be valuable in streamlining test batteries for a given toxic exposure. The analysis is limited by application to a single ethnic group. 2. Lotti M. Low-level exposures to organophosphorous esters and peripheral nerve function. Muscle and Nerve. 25: 492-504, 2002. A critical review of studies examining the association of chronic low dose OPE exposure to development of peripheral neuropathy. In contrast to prior studies, recent data suggests peripheral neuropathy may develop in the absence of a preceding cholinergic toxicity, if low-level OPE exposure is experienced. An outline of the neurotoxic effects of severe OPE exposure is provided, including the cholinergic syndrome, intermediate syndrome, and organophosphate-induced delayed polyneuropathy (OPIDP). Considerable data from recent studies suggests peripheral neuropathy develops post low level OPE exposure. However, a lack of exposure data, mild or inconsistent changes in PNF, and small/poorly selected sample groups lead the authors to believe the data is inconclusive. Classic OPIDP provides the only available experimental model for comparison. The mechanism of low-level OPE induced peripheral neuropathy is unclear, as is the significance of the variety of findings in the reviewed studies. The authors claim some individuals may experience sensory deficits differing from the OPIDP pattern, but conclude there is no evidence that prolonged low-level exposure to OPE's leads to peripheral nerve dysfunction in humans. 3. Hwang K, Lee B, Bressier J, Bolla K, Stewart W, Schwartz B. Protein Kinase C activity and the relations between blood lead and neurobehavioral function in lead workers. Environmental Health Perspectives. 110 (2): 133-138, 2002. This study provides insight to the mechanism of lead toxicity in humans. 
It aims to determine whether PKC activity is associated with neurobehavioral function, or whether it acts as an effect modifier between blood lead levels and neurobehavioral test function. The cross-sectional study of 212 Korean current lead workers involved measurements of blood lead, PKC activity, and neurobehavioral function. PKC activity was assessed by the levels of phosphorylation of three erythrocyte membrane proteins, using an in-vitro back phosphorylation assay. The WHO NCTB test battery and several other tests measured neurobehavioral function. Controlling for confounding variables, results indicated that elevated blood lead levels were associated with impaired performance in tests of psychomotor and executive functions and manual dexterity only in subjects with lower in-vitro back phosphorylation levels, corresponding to higher in-vivo PKC activity. In-vivo PKC activity was not directly associated with decrements in neurobehavioral performance in humans with current occupational lead exposure. Yet it did modify the relationship between blood lead and neurobehavioral test scores. The authors suggest individuals with higher PKC activity may be at higher risk for neurotoxic effects of lead. This is the first study providing evidence that PKC plays a role in lead toxicity. Limitations of the study include reliance upon a single lead biomarker (blood lead), and extrapolation of information about PKC function in erythrocytes to its function in nervous tissue, though no direct studies provide evidence for this. 
4. Chouaniere D, Wild P, Fontana J, Hery M, Fournier M, Baudin V, Subra I, Rousselle D, Toamin J, Saurin S, Ardiot M. Neurobehavioral disturbances arising from occupational toluene exposure. American Journal of Industrial Medicine. 41: 77-88 (2002). A EURONEST study examining the neurotoxic effects of low-level toluene exposure. A cross-sectional study of 128 workers from two French printing plants. Neurotoxic effects were assessed by a self-administered EURONEST questionnaire investigating neurobehavioral performance, mood, and neurologic and psychosomatic symptoms. A computerized NES test battery measured attention, memory, learning, and psychomotor speed. Ambient air measures of toluene were collected from both plants. Subjects with acute exposure were excluded. Adjusting for confounders, results showed impaired performance in short-term memory function. Multiple regression analysis showed significant relations between current exposure and decrements in test scores for digit span forwards and digit span backwards, equivalent to an aging factor of 25 and 14 years, respectively (a sketch of this aging-equivalence calculation appears below). Other cognitive functions did not show impairment. No association was found between cumulative exposure and test performance or neurotoxic symptoms. The authors conclude current low-level toluene exposure may cause early decrements in memory function. Non-participation of the most highly exposed group may underestimate the relation of toluene to impaired neurobehavioral functions. More specific cognitive function tests and long-term follow-up are recommended for further study. 
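The "aging factor" reported in the Chouaniere entry above expresses an exposure-related score decrement as the number of years of aging that would produce the same change. The abstract does not report the fitted regression model, so the sketch below is an assumed illustration only: the coefficients and the exposure contrast are hypothetical, chosen to reproduce a 25-year equivalence.

```python
# Sketch of the "aging factor" idea from the Chouaniere abstract above: an
# exposure-related score decrement divided by the score change per year of age
# from the same regression model. All numbers below are hypothetical; the
# abstract does not report the fitted coefficients.
def aging_equivalent_years(exposure_coef, exposure_contrast, age_coef):
    """Years of aging that produce the same score change as the exposure contrast."""
    return (exposure_coef * exposure_contrast) / age_coef

# Hypothetical: digit span falls 0.05 points per ppm of toluene, the exposed group
# averages 25 ppm above referents, and the score falls 0.05 points per year of age.
print(aging_equivalent_years(exposure_coef=-0.05, exposure_contrast=25.0, age_coef=-0.05))  # 25.0
```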
Data obtained from an ongoing Air Force study provided measurements of PNF, NCVs, and vibrotactile thresholds. Subjects were categorized into low, medium, and high exposure groups, relative to serum dioxin levels measured at the outset of the study. Subjects were screened for DM at each exam. Results from 1992 and 1997 indicate a consistently increased risk of peripheral neuropathy (probable and diagnosed) in the high-exposure group of Operation Ranch Hand veterans, suggestive of an adverse effect. The study is limited by incomplete knowledge of dioxin exposure, confounding by the high number of veterans with DM, and changes in peripheral neuropathy status between subsequent exams.

A cross-sectional survey of 167 shoe workers was conducted to assess whether chronic exposure to organic solvents is associated with neuropsychiatric and mucous membrane irritation. The prevalence of self-reported health problems was compared with the history of long-term occupational exposure to solvents and plastics. Prevalence ratios were obtained by Cox regression, and adverse health effects were reported with 95% CIs. Tingling limbs (PR=1.8), sore eyes (PR=1.9), and impaired breathing (PR=2.0) were related to cleaning tasks; tingling limbs (PR=1.8) and sore eyes (PR=1.7) were common to plastic work; impaired breathing (PR=1.9) was associated with varnishing. The study indicates a strong association between self-reported health complaints and exposure to organic solvents for the observed tasks. Volatile organic solvents (dichloromethane, n-hexane) and plastic compounds (isocyanates and polyvinyl chloride) may be responsible for the high prevalence of complaints. Absence of ventilation and suitable protection increased exposure levels. Underestimation of complaints is possible, due to selection of subjects by factory management rather than by random selection.

3. Kaufmann H and Hainsworth R. Why do we faint? Muscle and Nerve. 24: 981-983, 2001. An editorial discussing proposed mechanisms for vasovagal syncope. Hypotension and bradycardia leading to insufficient cerebral perfusion cause loss of consciousness and postural tone. Proposed mechanisms for hypotension include sympathetic activation of beta-2 adrenergic receptors, reactive hyperemia, and NO-mediated vasodilation. Causes of fainting may be of central origin (vasodilation and bradycardia in response to intense fear or emotion), or may be due to failure of normal homeostatic reflexes or overstimulation of depressor reflexes (e.g., carotid sinus syncope, Valsalva-like maneuvers, micturition, orthostatic hypotension). The complete mechanism of vasovagal syncope remains unclear. The trigger mechanism involves both central stimulation and peripheral reflexes. Fainting is related to decreased effective blood volume, and the effect can be mitigated by volume expansion.

1. Grandjean P, Weihe P, Burse VW, Needham LL, Storr-Hansen E, Heinzow B, Debes F, Murata K, Simonsen H, Ellefsen P, Budtz-Jorgensen E, Keiding N, White RF. Neurobehavioral deficits associated with PCB in 7-year-old children prenatally exposed to seafood neurotoxicants. Neurotoxicology and Teratology. 2001, 23: 305-317. This study analyzed PCB-associated neurobehavioral dysfunction in a methylmercury-exposed cohort whose normal diet includes whale meat. Prenatal exposure to PCBs was examined by analysis of cord blood and tissue from a Faroese birth cohort (1986-87) of 435 children. Cord blood showed excellent correlation with cord tissue concentrations of PCBs in 50 paired samples.
Cohort children were clinically examined at 7 years of age, and neuropsychological deficits were compared with PCB levels in stored cord blood samples. Of 17 neuropsychological outcomes, cord PCB concentration was associated with deficits on the Boston Naming Test (basic language function), the Continuous Performance Test reaction time (attention), and possibly the California Verbal Learning Test (long-term recall). Increased thresholds were seen in 2 of 8 frequencies on audiometry, on the left side only. PCB-associated deficits were apparent only in children with higher levels of mercury exposure. An interaction of the two neurotoxicants is possible. This cohort showed limited PCB-related neurotoxicity, with possible confounding by methylmercury exposure.

2. Dick RB, Steenland K, Krieg EF, Hines CJ. Evaluation of acute sensory-motor effects and test sensitivity using termiticide workers exposed to chlorpyrifos. Neurotoxicology and Teratology. 2001, 23: 381-393. Sensory and motor testing was performed on a group of termiticide workers exposed to chlorpyrifos to evaluate the acute effects of exposure and the sensitivity of the measures to detect effects. The study group consisted of 106 termiticide applicators and 52 non-exposed controls. Current exposure was determined by urinary levels of TCP (3,5,6-trichloro-2-pyridinol). The mean TCP for applicators was 200 µg/g creatinine. Sensory-motor tests recommended by a NIOSH-sponsored panel were employed. Tests of olfactory dysfunction, visual acuity, contrast sensitivity, color vision, vibrotactile sensitivity, tremor, manual dexterity, eye-hand coordination and postural stability were analyzed. Results showed minimal acute effects from exposure to pesticides using urinary TCP as a measure of current exposure. A possible sub-clinical effect involving proprioceptive and vestibular pathways is suggested by a measured effect on postural sway. Evaluation of the sensitivity of the sensory and motor tests employed, using the MADD (minimal absolute detectable difference) as an indicator of effect size, showed that either a larger study group or a greater exposure effect was needed to determine a significant relationship.

3. Lauria G, Sghirlanzoni A, Lombardi R, Pareyson D. Epidermal nerve fiber density in sensory ganglionopathies: clinical and neurophysiologic correlations. Muscle and Nerve. 2001, 24: 1034-1039. The involvement of somatic unmyelinated fibers in sensory ganglionopathies was assessed by skin biopsy and quantitative sensory testing. The study group consisted of 16 patients with ganglionopathy, 16 with axonal neuropathy, and 15 normal controls. Skin biopsy was performed at the proximal thigh and distal leg. Neuropathy patients showed a greater proximodistal gradient of IENF density than controls, suggesting a loss of cutaneous innervation in a length-dependent fashion. In contrast, ganglionopathy patients with hyperalgesic symptoms showed global rather than proximodistal degeneration of sensory neurons. This was consistent with the clinical and neurophysiologic observations, and distinguished ganglionopathies from axonal neuropathies.

4. Thalidomide Neuropathy in Patients Treated for Metastatic Prostate Cancer. Muscle and Nerve. 2001, 24: 1050-1057. A prospective study of peripheral nerve function was conducted in a cohort of 67 patients with metastatic hormone-refractory prostate cancer.
The study was designed to assess the clinical and neurophysiologic features of thalidomide neuropathy and to assess the efficacy of the SNAP (sensory nerve action potential) index as a monitor for early onset. Patients were treated with thalidomide in an open-label trial and evaluated for induced neuropathies by neurologic exams and nerve conduction studies prior to treatment and at 3-month intervals. Average values were determined from SNAP values for the median, radial, ulnar and sural nerves in each patient. Of 67 initial patients, 55 discontinued treatment due to lack of therapeutic response; 24 remained at 3 months, 8 remained at 6 months, 4 remained at 8 months, and 3 remained at 9 months. Six patients developed neuropathy, with clinical symptoms and a decreased SNAP index (at least a 40% decline from baseline) occurring simultaneously. Older age and cumulative dose are possible contributing factors. The study concludes that poorly reversible neuropathy may be a complication of thalidomide treatment in older patients and recommends close clinical and electrophysiologic evaluation during treatment, and discontinuation of treatment as paresthesias develop. The SNAP index may be used to monitor the development of peripheral neuropathy, but is not a useful tool for early detection. Limits of this study included the lack of an untreated control group, and the withdrawal of most patients from the study as their cancer progressed or neuropathic symptoms increased.

5. Chisolm JJ. Evolution of the Management and Prevention of Childhood Lead Poisoning: Dependence of Advances in Public Health on Technological Advances in the Determination of Lead and Related Biochemical Indicators of Its Toxicity. Environmental Research. 2001, 86: 111-121. A review of technological advances as the basis for improvements in the recognition and diagnosis of childhood lead poisoning, and public health activities directed at its prevention. The history is divided into three stages: the pre-dithizone era, the dithizone era, and the atomic absorption/electrochemical era. The pre-dithizone era depended primarily on the identification of symptoms for diagnosis and case documentation. In the 1930s the development of the dithizone procedure allowed measures of lead in blood, urine and tissues to be determined. This allowed for more accurate documentation of cases, including encephalopathic, mildly symptomatic, and asymptomatic cases. The urinary UCP and ALA-U tests made possible the screening of children in hospitals, and greatly facilitated the diagnosis and management of cases. The advent of the chelating agents CaNa2EDTA and BAL allowed analysis of paint samples, and assisted public health activities in management. Volume capacity and the speed of analysis improved dramatically with the development of x-ray fluorescence. Developments in the atomic absorption spectrophotometric and electrochemical era allowed advancements in lead analysis and provided a basis for improvements in research on lead toxicity and in public health steps to decrease its occurrence. Today advanced lab technology is more than adequate for screening the necessary volume of samples, yet public health procedures reach only one-half to two-thirds of high-risk children.

The study examines the relationship between PD and exposure to environmental factors such as living in a rural area or on a farm, well water use, farming, exposure to farm animals, and pesticides.
It involved a meta-analysis of peer-reviewed case-control studies: 16 for living in rural areas, 18 for well water drinking, 11 for farming, and 14 for pesticides. All were evaluated for statistical significance, heterogeneity, and publication bias. Significant heterogeneity was detected among studies, and a combined odds ratio was calculated. The meta-analysis concludes that living in rural areas, drinking well water, farming and exposure to pesticides carry a small but significant increased risk for development of PD, possibly due to exposure to neurotoxins. The authors suggest future cohort studies be conducted for prospective evaluation of the association between environmental factors and PD, as well as studies targeting specific occupational groups; future studies must also examine the composition of soil, water, pesticides and fertilizers, application processes, and potential lifestyle differences of subjects.

1. "Neuropsychological Function in Gulf War Veterans: Relationships to Self-Reported Toxicant Exposures" White RF, et al. American Journal of Industrial Medicine Vol. 40, pp. 42-54 (2001). Four years after the Gulf War, military personnel who had been exposed to neurotoxicants and military personnel who were not exposed to neurotoxicants were tested on several specific tests over the following domains: general intelligence, attention/executive function, motor ability, visuospatial processing, verbal and visual memory, mood, and motivation. Lower scores on neuropsychological tests were observed among the Gulf-deployed group compared to the non-Gulf-deployed group, in the areas of attention, executive function and mood. However, after effects for PTSD and psychological diagnoses were controlled for, only mood effects were observed at significant levels.

2. "Prospective Study of Caffeine Consumption and Risk of Parkinson's Disease in Men and Women" Ascherio A, et al. Annals of Neurology, Volume 50, No. 1, July 2001. The consumption of caffeine from different sources and the risk of Parkinson's disease in men and women was examined prospectively within the HPFS (Health Professionals' Follow-Up Study) and the NHS (Nurses' Health Study) cohorts. The study population was followed for 10 years in men and 16 years in women. In men, an inverse association was observed with consumption of caffeine (including coffee, noncoffee sources, and tea). The relative risk in men was 0.42. Among women, the relationship between caffeine and risk of Parkinson's was U-shaped, with the lowest risk observed at moderate intakes of 1-3 cups of coffee/day.

3. "Environmental Pesticide Exposure as a Risk Factor for Alzheimer's Disease: A Case-Control Study" Gauthier E, et al. Environmental Research Section A Volume 86, pp. 37-45 (2001). After controlling for obvious confounding factors such as genetics, occupational exposure and sociodemographic factors, there appeared to be no significant risk for Alzheimer's disease with exposure to herbicides, insecticides, and pesticides. Statistical analysis with logistic regression was performed on a randomly selected elderly population from Quebec, Canada. This area is geographically isolated, making it easier to evaluate long-term environmental exposure.

4. "Amyotrophic Lateral Sclerosis in a Battery-factory Worker Exposed to Cadmium" Bar-Sela S, et al. International Journal of Occupational and Environmental Health Vol. 7, pp. 109-112 (2001).
This is a case report of a 44-year-old man who had nine years of heavy exposure to cadmium from working in a nickel-cadmium battery factory. He had classic signs of cadmium toxicity and was diagnosed with ALS nine years after starting work at the factory. The paper suggests several mechanisms by which cadmium may cause neurotoxicity.

5. "Symptoms of Gulf War Veterans Possibly Exposed to Organophosphate Chemical Warfare Agents at Khamisiyah, Iraq" McCauley LA, et al. International Journal of Occupational and Environmental Health Vol. 7, pp. 79-89 (2001). Nine years after the Gulf War in 1991, 2,918 U.S. Gulf War veterans were interviewed by telephone for past and present symptoms indicative of organophosphate chemical exposure. Veterans who were located within 50 km of a known episode of sarin/cyclosarin release (in Khamisiyah in Jan-Mar 1991) were compared with those who were outside of this 50 km radius. There was no significant difference in current symptoms experienced by the two groups. However, there was a significant difference in self-reported symptoms recalled by the within-50-km group for the two weeks following the known chemical exposure. These symptoms are consistent with low-level sarin exposure.

6. "Criteria for the work relatedness of upper-extremity musculoskeletal disorders" Scandinavian Journal of Work and Environmental Health 2001, Vol. 27, Suppl. 1. This is a practical article describing work factors that cause UEMSDs. The physical factors include extreme posture, high repetitiveness for most of the day, high force, and little recovery time. Non-physical factors include perceived work stress, work tempo, work pressure, mental demands, and social stress. A four-step process is given for deciding on the work-relatedness of a UEMSD. Many descriptive, practical tests are described and illustrated to test for various UEMSDs.

7. "Effects on the nervous system in different groups of workers exposed to aluminium" Iregren A, et al. Occupational and Environmental Medicine 2001; 58: 453-460. Exposure to aluminium was evaluated with aluminium blood and urine levels in groups of aluminium pot room and foundry workers, aluminium welders, and aluminium flake powder producers. These groups were evaluated for neurotoxic effects at different dose levels with mood and symptom questionnaires and psychological and physiological tests. The control group was steel welders. There was no statistically significant neurotoxic effect among the aluminium pot room, foundry, or flake powder workers. There was a subtle neurological effect in the aluminium welders at urinary aluminium concentrations above 100 µg/l.

1. Biernat H, Ellias SA, Wermuth L, Cleary D, De Oliveira Santos EC, Jorgensen PJ, Feldman RG, Grandjean P. Tremor Frequency Patterns in Mercury Vapor Exposure Compared with Early Parkinson's Disease and Essential Tremor. Neurotoxicology 20 (6): 945-952, 1999. A new instrument allows measurement of tremor intensities at different frequencies. 81 healthy controls showed higher preferred-hand tremor intensity, dependent on age. 10 parkinsonian patients showed increased intensity within the lower frequencies, 3-6.5 Hz. 10 patients with essential tremor had peak frequencies in both windows, some only on one side. 63 Brazilian gold traders exposed to Hg vapor showed increased intensity in the high-frequency window.
Their urine mercury levels were correlated with the number of burning sessions per week, though they did not correlate with tremor intensity within the high-frequency window.

2. Viaene MK, Masschelein R, Leenders J, DeGroof M, Swerts LJVC, Roels HA. Neurobehavioral effects of occupational exposure to cadmium: a cross sectional study. Occ Env Med. 2000; 57: 19-27. 89 adult men, 42 of them exposed to Cd, were given a blinded, standardized examination consisting of a computer-assisted neurobehavioral test battery. Cd workers performed worse on visuomotor tasks, symbol digit substitution and simple reaction time to direction or location. Complaints consistent with peripheral neuropathy, and with concentration and equilibrium problems, correlated with urinary Cd (Cd-U).

3. Arnetz BB. Model development and research vision for the future of multiple chemical sensitivity. Scand J Work Env Health. 1999; 25 (6): 569-573. A two-step model for the pathophysiology of MCS is presented: first, different environmental stressors act as initiators; second, the limbic system and other parts of the brain become sensitized and hyperreactive to environmental triggers. Odor acts as an important trigger.

4. Jones TF, Craig AS, Hoy D, Gunter EW, Ashley DL, Barr DB, Brock JW, Schaffner W. Mass Psychogenic Illness Attributed to Toxic Exposure at a High School. NEJM. 2000, 342, 96-100. A gasoline-like smell was noticed in a high school on a day in November 1998. Soon after, many students and teachers noted symptoms. After the school was evacuated, students were evaluated in the local hospital emergency room. Five days after the school was reopened, another group of people complained of symptoms. No cause was identified for these symptoms. A questionnaire revealed that the reported symptoms were associated with female sex, seeing another person become ill, knowing that a classmate was ill, and noticing an unusual odor at the school. The illness was attributed to mass psychogenic illness.

5. McGill C, Boyer LV, Flood TJ, Ortega L. Mercury toxicity due to use of a cosmetic cream. JOEM. 42 (1), January 2000. The Arizona Department of Public Health assessed urine mercury in those who had used a mercury-containing beauty cream. 66 of 88 were found to have levels >20 µg/l, and 55 people were evaluated in a clinic. No major abnormalities were noted on physical exam. Mean urinary Hg fell from 170 µg/l to 32 µg/l at the final test, 139 days later. Neuropsychiatric symptoms were frequently reported without objective signs.

6. Letzel S, Lang CJG, Schaller KH, Angerer J, Fuchs S, Neurndorfer B, Lehnert G. Longitudinal study of neurotoxicity of occupational exposure to aluminum. Neurology 2000; 54: 997-1000. Biological monitoring, neuropsychological testing and P300 potentials were measured in 32 aluminum dust-exposed workers and matched controls. Time in the industry ranged from 2.8 to 42 years (median 13.7 years), with a median of 12.6 years of exposure to Al. Results did not differ for the subjects versus controls. The authors noted that chronic exposure to Al dust at the levels documented does not induce measurable cognitive decline.

7. Marie RM, Le Biez E, Busson P, Schaffer S, Boiteau L, Dupuy B, Viader F. Nitrous Oxide Anesthesia-Associated Myelopathy. Arch Neurology, 2000; 57: 380-382. Two weeks after surgery, a 69-year-old man developed ascending paresthesia of the limbs, ataxia and sensory loss of all 4 limbs with absent reflexes. Four months after the procedures, he had paraplegia, severe weakness of the upper limbs and cutaneous anesthesia sparing the head.
Pernicious anemia was initially diagnosed on the basis of serum levels and a Schilling test. N2O exposure has been reported to cause subclinical cobalamin deficiency; it can trigger or worsen the neurological consequences of this deficiency. This possibility should be considered after surgical procedures, even in the absence of hematological changes.

1. Khattak S, K-Moghtader, McMartin K, Barrera M, Kennedy D, Koren G. Pregnancy Outcome Following Gestational Exposure to Organic Solvents. JAMA. 1999, 281 (12): 1106-1109. 125 pregnant women occupationally exposed to organic solvents were seen in their first trimester between 1987 and 1996. Each was matched to a female exposed to a non-teratogenic agent on age, gravidity, and smoking and drinking status. Occurrence of major congenital malformations was the outcome measure. 13 major malformations occurred in the exposed group versus 1 in the non-exposed group (RR 13.0; 95% confidence interval, 1.8-99.5). Twelve occurred among the 75 women who reported symptoms from their occupational exposures; none occurred in the 43 asymptomatic exposed women. More of the exposed had previous miscarriages than the controls (54/117 versus 24/125; p<.001). Women with previous miscarriages had the same rate of major malformations as those who were not exposed. The organic solvent-exposed group was composed of 37 women who worked in a factory, 21 who were laboratory technicians, 16 artists, 14 in the printing industry, 13 chemists, 8 painters, 4 office workers, 3 car cleaning workers, 3 veterinary technicians, 2 orthotists, 2 funeral home service workers, 1 carpenter and 1 social worker. Women with occupational exposure to organic solvents had a 13-fold risk of major malformations as well as an increased risk of miscarriages in previous pregnancies while working with organic solvents.

2. Feldman RG, Ratner MH, Ptak T. Chronic Toxic Encephalopathy (CTE) in a Painter Exposed to Mixed Solvents. Environmental Health Perspectives, 107, 5, 1999. A 57-year-old painter who had been exposed to various solvents over 30 years of employment developed short-term memory impairment and changes in affect, which progressed until exposure ended (he could no longer perform his work duties). Serial neuropsychological testing after exposure ceased revealed persistent cognitive problems without further progression. MRI revealed global and symmetrical volume loss involving more white than gray matter. Findings were deemed consistent with toxic encephalopathy. The differential diagnosis of dementia is discussed. The absence of naming problems (anomia), the preservation of language skills and the lack of progression of dementia are atypical for Alzheimer's disease and consistent with chronic toxic encephalopathy. The diffuse MRI findings were consistent with other reported cases of CTE.

3. Buschke H, Kuslansky G, Katz M, Stewart WF, Sliwinski MJ, Eckholdt HM, Lipton RB. Screening for dementia with the Memory Impairment Screen. Neurology. 1999, 52, 231-238. To improve screening for AD and dementia, the Memory Impairment Screen (MIS) was developed: a 4-minute, four-item test of delayed free and cued recall. The MIS uses controlled learning to ensure attention, induce specific semantic processing and optimize encoding specificity to improve detection of dementia. Reliability was tested. The MIS had good sensitivity and specificity as well as PPV. Each participant was presented with an 8½" by 11" sheet of paper with the four MIS items to be recalled, printed in 24-point upper-case letters. Each item belonged to a different category.
The individual was asked to identify and name each item aloud, and then asked to identify and name each item ("potato") when the tester said its category cue ("vegetable"). The sheet was then removed. After a nonsemantic interference task (repeated counting from 1 to 20 and back) lasting approximately 2 to 3 minutes, the individual was asked for free recall of the four items in any order. The category cues were then presented to elicit cued recall of only those items that were not retrieved by free recall. The number of items retrieved by free recall and the number retrieved by cued recall were recorded. The MIS score is calculated as (2 x free recall + cued recall); a short worked example appears at the end of this bibliography. Mean MIS scores were 2.5 for dementia (including AD), 2.1 for AD only, and 7.2 for non-demented subjects. Age for all three groups was near 80; education was 11-12 years for all three categories.

4. Gifford DR and Cummings JL. Evaluating dementia screening tests. Neurology. 1999, 52, 224-227. Review article. Limitations of the MIS are mentioned. Despite these, the MIS appears to be reliable and valid.

5. Cornblath DR, Chaudhry V, Carter K, Lee D, Seysedadr M, Miernicki M, Joh T. Total neuropathy score: validation and reliability study. Neurology 1999, 53, 1660-1664. The TNS provides a single measure to quantify neuropathy. It includes grading of symptoms, signs, nerve conduction studies, and quantitative sensory testing. Inter- and intra-rater reliability were excellent (0.966 and 0.986). Sensory symptoms, motor symptoms, autonomic symptoms, pin sensibility, vibration sensibility, strength, tendon reflexes, vibration sensation (QST vibration), sural amplitude and peroneal amplitude are each rated on a 0-4 point scale.

6. Periquet MI, Novak V, Collins MP, Nagaraja HN, Erden S, Nash SM, Freimer ML, Sahenk Z, Kissel JT, Mendell JR. Painful sensory neuropathy. Neurology 1999, 53, 1641-1647. The role of skin biopsy in establishing a diagnosis of neuropathy. Intraepidermal nerve fiber (IENF) density is tested. Both large- and small-fiber involvement can characterize the presentation of neuropathy. IENF density has been proposed as a means of diagnosing small-fiber neuropathy. 38% of patients referred to a neurology clinic with painful extremities had reduced IENF densities; NCVs had been normal in these patients. Thus IENF density was more sensitive in this population of patients. 51% had abnormal NCVs and did not have reduced IENF density. IENF density was more sensitive than quantitative sudomotor axon reflex testing and quantitative sensory testing.

7. Herrmann DN, Griffin JW, Hauer P, Cornblath DR, McArthur JC. Epidermal nerve fiber density and sural nerve morphometry in peripheral neuropathies. Neurology 1999, 53, 1634-1640. 26 patients with neuropathic complaints were studied with NCV, distal leg skin biopsy and sural nerve biopsy. IENF density and myelinated and unmyelinated fibers in the sural nerve were examined. Reduced IENF was the only indicator of small fiber depletion in 23% of the cases. It was normal in acquired demyelinating neuropathies and where clinical suspicion was low. Distal leg IENF density may be more sensitive than sural nerve biopsy in identifying small fiber sensory neuropathies.

8. Kennedy WR and Said G. Sensory nerves in skin. Answers about painful feet. Neurology. 1999, 53, 1614-1615. Review article regarding IENF density testing. Reduced ENF density occurs with diabetic neuropathy, sensory neuropathies including HIV-associated neuropathy, and small fiber neuropathy, among others.
ENF density may be the first detectable sign of neuropathy and can perhaps detect changes over time, as during progression of disease or in therapeutic trials. The skin biopsy and the new non-invasive skin blister method give the neurologist the tools to make an early diagnosis of epidermal nerve damage in patients with metabolic, inherited or toxic neuropathy.
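The MIS scoring rule cited in the Buschke et al. entry above reduces to simple arithmetic. The sketch below is a minimal illustration only; the function name and the example cut-off of 4 are assumptions made for demonstration, not values taken from the paper.

```python
def mis_score(free_recall: int, cued_recall: int) -> int:
    """Memory Impairment Screen score: 2 x free recall + cued recall.

    free_recall -- items (of 4) retrieved without cues
    cued_recall -- remaining items retrieved after their category cues
    Together the two counts cannot exceed 4, so scores range from 0 to 8.
    """
    if free_recall < 0 or cued_recall < 0 or free_recall + cued_recall > 4:
        raise ValueError("recall counts must be non-negative and sum to at most 4")
    return 2 * free_recall + cued_recall


# Example: 3 items recalled freely and 1 more with a cue gives a score of 7,
# close to the mean of 7.2 reported for non-demented subjects.
print(mis_score(3, 1))  # 7

# A hypothetical screening cut-off (e.g. score <= 4 flags possible impairment):
print(mis_score(1, 1) <= 4)  # True
```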
On Wednesday 9 January 2019, the Centre for Innovation Management Research (CIMR) held a PhD workshop featuring three presentations and a panel discussion.
- 2.15 pm – Welcome, tea and coffee
- 2.30 pm – Presentation: ‘Longitudinal Investigation of Leader Development among Saudi Women Academics: How Does Readiness Matter?’ – Tahani Alharbi (.pdf)
Abstract: Saudi Arabia’s (SA) top leadership positions in higher education are male-dominated despite increasingly better-qualified women academics. Hence, this research has two goals: an in-depth analysis of why women academics are mostly excluded from senior roles in SA, and an examination of whether SA’s female academics are ready to develop as leaders once opportunities arise. Taken longitudinally, this study will illuminate how recent rapid political, social and cultural changes in SA have contributed to changes in women’s self-perceptions regarding their leadership potential and what hinders or accelerates this process. Building on the developmental leadership readiness literature (Avolio and Hannah, 2008, 2009) and the role congruity theory of prejudice toward female leaders (Eagly and Karau, 2002), this research contributes to the scarce knowledge and practice concerning leadership developmental readiness from a gender lens in SA. Methodologically, because research and theories are dominated by the Western-North American context, adopting a narrative analysis approach will contribute to a more robust understanding of female academics’ experience in SA. Preliminary findings suggest Saudi women’s ability to claim and grant leadership roles is context-dependent. Yet initial findings reveal that gender inequality practices that impact Saudi women’s advancement are institutionalised within the country’s higher educational system. Finally, there is growing evidence that new socio-political reforms in SA are changing perceptions toward accepting more women in elite positions; however, there is still a long way to go toward the social legitimacy of women academics.
- 2.50 pm – Presentation: ‘The Experience and Role of Entrepreneurial Passion among Tech Founders During the Founding Stage of their Venture’ – James Brook (.pdf)
Abstract: Entrepreneurial enterprises in the Tech and Digital sectors form an increasingly important part of the UK economy. One of the critical factors determining the success of entrepreneurs is how they persevere and remain motivated during the early start-up and growth phase of their venture, when challenges, setbacks, failures and frustrations are common. A related area to entrepreneurial motivation is entrepreneurial passion (EP). This has become deeply embedded in the folklore and practice of entrepreneurship but remains poorly researched and understood. James will outline the results of his field research, based on fourteen semi-structured interviews with Tech and Digital entrepreneurs in the UK. He will also talk about the next stage of his research, a longitudinal study to examine how entrepreneurs experience EP over the first crucial stages of their venture and the role it plays in helping them persist and achieve their goals.
- 3.10 pm – Presentation: ‘Corporate-Startup Collaboration in AI Innovation Ecosystems: Motivations, Formats, and the Role of Stakeholders to Support Impacts’ – Filipe Martins (.pdf)
Abstract: In a broader perspective, this PhD research explores the relationship between large and small enterprises through the corporate innovation lens.
More specifically, the study focuses on the corporate-startup programme phenomenon and will investigate the motivations, formats, and impact results of these kinds of collaborative engagement for the large companies and cohorts of startups. Special attention will be given to the role of other stakeholders in the innovation ecosystem in generating a positive and/or negative impact on the relationships between corporates and startups in the development and exploitation of new products and services. Therefore, this study will shed some light on this theme by investigating the Artificial Intelligence (AI) ecosystems in San Francisco (Silicon Valley), in the US, and London, in the UK, which, respectively, hold the top two positions in the AI regional hubs ranking (ASGARD, 2018).
Really good thoughts on how to approach SENs in your class. Putting labels aside is the first and biggest step, I think, although it can be difficult at times, particularly when your students and their parents are already somewhat prejudiced themselves. It’s an odd circumstance, when they feel they will be treated differently, mostly because – in my experience – they are used to that behaviour from previous schools or teachers. You as a teacher might not consider a student with SENs as someone who needs to be approached in a particular way, but what if their parents expect you to? Marie Delaney is a teacher, trainer, educational psychotherapist and author of ‘Teaching the Unteachable’ (Worth). She has worked extensively with pupils with Special Educational Needs and trains teachers in this area. Do you have learners with special educational needs (SENs) in your class? Have you had any training for teaching these learners? Probably not. In many countries across the world governments are promoting a policy of inclusion for learners with SENs. However, there is often a gap in training and resources for teachers to implement this. This has led many teachers to feel anxious and insecure about their teaching skills. There are some common fears and misconceptions which make a lot of teachers anxious. 5 myths that make teachers anxious - You have to be a specially trained teacher to teach learners with SENs Not true. Good teaching strategies will benefit all learners. Good classroom management and a positive attitude are things…
WINSTON-SALEM, N.C. – A major clinical trial of blood pressure medications has concluded that an inexpensive diuretic (water pill) is more effective in treating high blood pressure and preventing cardiovascular disease than newer, more expensive medications. The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT), conducted from February 1994 through March 2002, compared the drugs for use in starting treatment for high blood pressure. “The preferred drug is the diuretic for three reasons. It provides better control of hypertension; it reduces complications from hypertension—particularly heart failure—more effectively; and it is 10 to 20 times less expensive than the other drugs used in the trial,” said Curt Furberg, M.D., Ph.D., of Wake Forest University Baptist Medical Center, chairman of the study’s steering committee. ALLHAT is the largest study ever to compare different types of hypertension drugs. It included more than 33,000 participants age 55 years or older at 623 clinical sites in North America. All the patients had hypertension and at least one other coronary heart disease risk factor. Approximately 50 percent of the participants were women and 35 percent were African-American. “The bottom line is ALLHAT has shown that it matters which drug you use to control hypertension,” said Furberg. Participants were randomly assigned to receive the diuretic chlorthalidone, amlodipine (a calcium channel blocker sold under the name Norvasc) or lisinopril (an angiotensin-converting enzyme (ACE) inhibitor sold under the names Prinivil and Zestril). In 2000, another drug used in the trial, doxazosin (an alpha-blocker sold under the name Cardura), was pulled from the trial because it was determined early in ALLHAT that it was not as effective as the less expensive diuretic medication. The study does not recommend that patients stop taking their medication if they are using a drug other than a diuretic. However, they are encouraged to speak with their physicians about adding or switching to a diuretic for their treatment. ALLHAT’s findings indicate that most patients will need more than one drug to adequately control their blood pressure, and that one of the drugs should be a diuretic. The National Heart, Lung and Blood Institute, part of the National Institutes of Health, supported the study.
Capital punishment arrived in the colony of North Carolina as part of English common law. Even misdemeanors warranted harsh corporal punishment, and a long list of felonies qualified for the death penalty. People were executed not only for murder but also for rape, theft, arson, and assault. White colonists quickly began using this “sanguinary code,” as the British called it, to enforce the institution of slavery. Colonial lawmakers created laws that applied only to the enslaved, and special courts, run by enslavers, ordered executions. For a time, the law even provided monetary compensation to enslavers whose chattel had been executed. According to death penalty historian Seth Kotch, in less than 50 years during the 1700s, North Carolina executed more than a hundred enslaved people. Meanwhile, the law empowered whites to punish their human “property” however they saw fit, while bands of white men patrolled for runaways. In the century following American independence, many Northern states abolished the death penalty. But Southern states, focused on maintaining slavery, retained and expanded harshly punitive criminal codes. In antebellum North Carolina, people could be executed for helping an enslaved person escape or joining an anti-slavery rebellion, among many other things. Emancipation and Reconstruction offered brief hope that African Americans would be allowed full citizenship. But after the federal government withdrew its protection, white supremacists seized political power using mob violence, lynchings, and the death penalty. In fact, lynchings and executions were often nearly indistinguishable. Most lynching victims were Black, as were most of those executed. Lynchings were often inflicted in response to transgressions of the racial order. Similarly, executions were conducted almost exclusively as punishment for crimes against white people. And even when the accused escaped the lynch mob, he was likely to be tried swiftly by a jury composed of local white men and executed publicly by a white sheriff. Between 1865 and 1910, North Carolina executed 160 people, and at least 119 of them were African American. Another 14 were listed as race unknown.
This essay examines “Cantopop electronic dance music,” a term that collectively designates the several subgenres of electronic dance music that originated in Hong Kong. This electronic dance music emerged in 1998 and became the dominant club music in Greater China in the early 2000s. While scholars have positively appraised numerous aspects and subgenres of Cantopop, they have never paid attention to local genres of electronic dance music. Popular music critics and professional musicians in Greater China also dismiss Cantopop electronic dance music as insincere or incompetent imitations of global (i.e., European and American) electronic dance music. In this essay I show that Cantopop electronic dance music has valuable sociocultural characteristics, and I elaborate on what they are. Although this dance music has not inherited many of the desirable sociocultural properties of Western electronic dance music, it has gained new ones through processes of cultural hybridization. I will illustrate, through analyzing a range of clubbing practices that hybridize singing with dancing, how Cantopop electronic dance music empowers local audiences by giving them a central role in music reproduction. Through examining the remixing of electronic dance tracks by local DJs and the rewriting of lyrics of dance tracks by local clubbers, I will explicate how local actors ingeniously hybridize, appropriate and re-articulate local and global musical materials.
Aaron Hicklin, editor-in-chief of Out magazine, recently celebrated the progress made on the depiction of ‘gay culture’ in American TV shows. He writes in The Guardian: In many ways the transformation of attitudes has been ongoing for decades, accelerated in large part by the impact of Aids, which reconfigured gay identity around community and relationships. In TV shows such as Glee and Modern Family, gays are no longer comic stooges or punchlines, their relationships treated with the same respect as those of their straight counterparts. They hold hands, they kiss, they even share the same bed. This was a quantum leap on 1990s shows such as Will & Grace, in which the gay characters had the whiff of “confirmed bachelors”, to use the archaic euphemism of obituary writers, rarely presented in functioning relationships, much less in love. (Via The Guardian.) While it is important to celebrate positive changes, there’s still a long way to go. Most of the shows Hicklin discusses play it relatively safe in their portrayal of queer sexualities, while the depiction of heterosexual desire and heterosexual sex is more overt. For example, Hicklin mentions the limitations of previous trailblazing shows, such as Will & Grace. When I used to teach gender and sexuality, most of my students thought that Will & Grace had done wonders for the acceptance of LGBTQ people. While it’s true that the show was both popular and progressive for its day, the gay characters (Will and Jack) were largely portrayed as sexless beings. They were rarely seen kissing the men they dated, nor did they show much physical affection with their lovers. The heterosexual women characters (Grace and Karen) were more overtly sexual in their antics, storylines and behaviour. Queer as Folk pushed the envelope a lot more with its portrayal of romance, relationships and sex (both the UK and US-Canadian versions). In the American version, on-again-off-again lovers Brian and Justin danced, laughed and had graphic sex on a regular basis (as did most of the other characters). Similarly, the American show The L Word had sex scenes, but the focus was more on the women’s emotional entanglements. Hicklin also mentions True Blood – another show that is more diverse in its representation of characters, yet the show has been more graphic in its portrayal of heterosexual sex. Tara’s sex scenes with her girlfriend have also been more lingering than Lafayette’s sex scenes with his boyfriend. The changes in popular culture are headed in the right direction, but heteronormativity still prevails. That is, heterosexuality is still the focus, and ideas of heterosexual desire still largely influence how LGBTQ characters are portrayed. The more they conform to male heterosexual desire (two women having sex), the more air time is spent on a character’s love life. The converse is also true: the further a character is from “acceptable” male heterosexual fantasy (gay men), the less likely the audience is to see that character’s sexual life.
The challenges for sensor technology in harsh environments
A harsh environment includes, but is not limited to: extreme temperatures, high pressure, high shock, vibration, mechanical stresses, chemically corrosive substances and exposure to the elements or hazardous environments such as space. The list of applications demanding such specialist/precision sensors keeps growing - automotive, structural health monitoring, seismic applications, gas turbines, aerospace, industrial/oil and gas exploration, and the marine industry, to name a few.
Considerations for harsh environment sensors
A standard sensor would have limited performance under these conditions, especially outside of ‘normal’ temperature ranges between -40°C and +125°C, so sensors designed specifically for these environments need to be hardy and robust enough to withstand demanding conditions, whilst still performing:
- Without failure
- Fast response
- Precision measurement
- High performance
- High reliability
- Stable with extreme resistance
We feature a wide range of high-quality specialist sensors from leading suppliers, as well as having our own field engineers with experience of working with applications across a number of harsh environments, able to advise on the best sensors or combinations from this portfolio to achieve the best performance for each environment or application. We also understand that for such demanding applications, these specialist sensors may require further customisation in order to operate under these conditions, which we can also assist with. Engineers must consider an application for a harsh environment prior to its design, as the environment will affect the specifications of the product and its performance. The top consideration is the protection and maintenance of the sensor. To maintain operation in a hostile environment, it needs to operate without failure and without the difficulty (and expense) of maintenance or replacement. Therefore, choosing the right sensor, and protecting it with the correct coatings, housings and materials, is paramount for success. If you are unsure which one is right for your design given all the choices available – we can help with an extensive specialist portfolio and in-house expertise.
Bespoke sensing and connectivity solutions
Our range covers all key parameters across any environment, but to help solve more complex challenges for your application, we can design and build a complete solution to your needs with customer-specific combinations and custom-designed solutions along with wireless connectivity. So, if you’re not sure where to start, contact our sensor experts who can help guide you through all of the options and find the best choice for performance and protection for your application in a harsh environment.
Outdoor and environmental hostile environments
Sensors in these environments need to be able to tolerate high vibration and shock in applications such as monitoring deterioration of bridges, dams and buildings in structural health monitoring, construction, agriculture or seismology equipment, as well as changes in pressure, temperature and exposure to acids or alkalis such as saltwater in a marine environment. These conditions would compromise a standard sensor, which may not be able to withstand the stresses or corrosive substances.
We recommend the following sensors, specially designed for vibration applications that require low noise, low power, resistance to repetitive high shocks and insensitivity to temperature, giving confidence in high-accuracy measurements in harsh environments.
Safran Colibrys SI1000 series
Designed specifically for strong motion class B seismic measurements, this small hermetically sealed sensor guarantees accurate and stable seismic measurements and requires no re-calibration or maintenance during its lifetime.
- Ranges: ±3, ±5 g
- Low noise: 0.7 µg/√Hz (±3 g)
- Bandwidth: 0-550 Hz (±3 g)
- Non-linearity: ±0.3% FS
- Size: <1 cm²
- Embedded logic functions: self-test, reset
Humidity and temperature sensor IP67
Harsh environment combined humidity and temperature sensor:
- Ready to use
- Fully calibrated and temperature-compensated
- Water-resistant
- IP67 certified
- Digital output or Pulse Density Modulated (PDM) output converted to analog
- Available in multiple flexible cable lengths
- Precise and accurate (±2% RH, ±0.5°C, 14-bit resolution)
- Low current consumption
- Reliable in harsh environments
- Flexible mounting options
Tewa TTO series
The TTO series is an IP68 waterproof temperature probe encapsulated with thermoplastic elastomer materials using over-moulding technology, specifically designed for harsh environmental conditions. It offers excellent performance in extreme freeze-thaw conditions with a wide choice of insulation materials, and is a perfect solution for applications where the best waterproof and moisture protection is required.
- Excellent insulation against moisture
- Degree of waterproof protection IP68
- Flexible size tolerances (smallest tip diameter 4.5 mm)
- Excellent resistance to UV (black insulation)
- RoHS compliant and halogen free
- Wide range of R/T characteristics (a simple conversion sketch appears at the end of this section)
- Marking possible on request
- NTC and PTC thermistors and Pt RTD versions are available
- Cable remains flexible at minimum design temperature
Industrial applications are a challenge for thermistor sensors, which need to cope with heat, cold, humidity and repeated or rapid changes in pressure and temperature. These sensors need to be hardy and able to withstand constant changes in pressure, extreme temperatures from freezing cold to boiling hot, and exposure to corrosive substances. These applications vary across Oil & Gas exploration, Heating, Ventilation and Air-conditioning (HVAC), refrigeration, and industrial hydraulics that need monitoring for safety and control purposes. Sensors recommended for these types of applications need superior mechanical strength, robust casing protection, high heat resistance, fast response, and excellent heat, oil and solvent resistance with strong sealing.
Shibaura advanced sensors
These world-leading sensors provide best-in-class quality, precision, and extreme reliability. Their range of sensors is unmatched in performance, featuring a unique design and a fully automated, zero-defect manufacturing process, which means Shibaura thermistors work consistently - without failure or deterioration in harsh conditions and extreme temperatures. In addition, their sensors are highly customisable, offering a flexible choice of mouldings, assemblies, cable lengths and colours plus a variety of housings.
Custom sensing assemblies are particularly useful for harsh and high-end applications, and our team of engineers, with more than 25 years of experience designing custom sensor probes and assemblies for a range of applications, will help ensure you choose the right sensor element, design a suitable housing, and add the appropriate cable and connectors.
- MP2 - Best used in air conditioners for business use, oil coolers, showcase freezers. Measuring water temperature, oil temperature, ambient temperature
- EP1 - Best used in automobile air conditioners & radiators, air conditioner outdoor units/pipes incl. discharge pipes, dehumidifiers, freezing machines, chiller lubricant, showcase refrigerators. Measuring solid inside/surface temperature, water temperature, oil temperature, ambient temperature
- CC1 - Best used in air conditioner indoor units, washer dryers, refrigerators, showcase freezers. Measuring ambient temperature
Applications such as downhole drilling in the oil and gas industry regularly require extremely reliable parts with ratings that often surpass military specifications. With extremely high performance stability, shock resistance and the lowest non-linearity and noise in the MEMS marketplace, the TS1000T is a best-in-class accelerometer, perfect for high-temperature directional drilling applications, operating from -40°C to +150°C and up to +175°C with intermittent temperature exposure.
- High-temperature range: -40 to 175°C
- Long-term bias repeatability: ±2 mg for the ±2 g range
- Excellent bias residual model: < ±0.6 mg for the ±2 g range
- Low noise: 7 µg/√Hz
Variohm Eurosensor EPT3100
The EPT3100 pressure transducer is a high-quality, all-stainless-steel pressure transducer that contains non-silicone oil and no internal 'O' rings. It is intended for use in the measurement of gases and liquids compatible with stainless steel. The EPT3100 strain gauge sensing element coupled with the latest ASIC circuitry gives:
- High strength
- Excellent accuracy
- A choice of high-level outputs
- High performance and long-term stability
- Protection within a rugged, stainless steel housing
- Custom options and cabling are available
- Available in gauge and absolute pressure, with ranges up to 5,000 bar
- Backed by a one-year warranty
Applications for automotive range from engine motors and converters, radiators, heaters and lithium batteries to EV charging, motor coils and air-con. The sensors in most of these applications are likely to be exposed to a range of substances, corrosive chemicals and high temperatures. High temperatures can cause some of the components, such as the O-rings used in a standard sensor, to deteriorate, leading to performance failure. We have extensive experience in working with automotive applications and recommend sensors from our partner Shibaura, a world leader in the field of advanced temperature sensor technology for harsh environment applications. Suitable for applications that are prone to frequent thermal cycles and high levels of humidity, their sensors give the extreme reliability, resistance and precision needed with a unique zero-defect design - featuring gold contacts in addition to waterproof glass coatings, resulting in almost no drift in resistance over time and allowing the sensors to be used at permanent operating temperatures ranging from -60°C to +1000°C without failure.
- EE1 - Automobile air conditioners & heaters, air conditioner indoor units, water heaters.
Solid inside/surface temperature, ambient temperature - RT2 - Automobile inverters, water heaters, heat pump water heaters. Solid inside/surface temperature - EP2 - Automobile battery chargers, air conditioner outdoor units, water heaters. Solid inside/surface temperature There are also customisable options available for more challenging environments - with sensors that offer a flexible choice of mouldings, assemblies, cable lengths and colours plus a variety of housings.
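The thermistor families mentioned above are characterised by their R/T (resistance versus temperature) curves. As a rough, illustrative sketch of how such a characteristic is used in firmware or test scripts, the snippet below converts an NTC resistance reading to temperature with the common Beta-parameter model; the R25 and Beta values are generic placeholders rather than figures for any of the parts listed here, and a real design would normally use the manufacturer's full R/T table or Steinhart-Hart coefficients.

```python
import math

def ntc_temperature_c(resistance_ohm: float,
                      r25_ohm: float = 10_000.0,  # placeholder: 10 kOhm at 25 degC
                      beta_k: float = 3950.0       # placeholder Beta constant
                      ) -> float:
    """Convert an NTC thermistor resistance to temperature in degrees Celsius.

    Uses the simple Beta-parameter model:
        1/T = 1/T25 + (1/Beta) * ln(R / R25), with T in kelvin.
    """
    t25_k = 298.15  # 25 degC expressed in kelvin
    inv_t = 1.0 / t25_k + math.log(resistance_ohm / r25_ohm) / beta_k
    return 1.0 / inv_t - 273.15

# Example: 10 kOhm corresponds to 25 degC by definition of R25, and a lower
# resistance indicates a higher temperature for an NTC part.
print(round(ntc_temperature_c(10_000.0), 1))  # 25.0
print(round(ntc_temperature_c(5_000.0), 1))   # ~41.5
```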
With fall rapidly approaching and the steady bombardment of ads for notebooks, calculators, and backpacks, there can only be one culprit: Back to School season is officially in session. When I was a kid, the first day of school always brought up a mix of feelings. On the one hand, I was nervous about the day and all the particulars, but I was also excited to get back to a routine after the malaise of the summer vacation. These conflicting emotions always led to butterflies in the stomach before my parents dropped me off. But once I had made it to the classroom, saw my friends or began to meet new ones, and slung my backpack on my desk chair, I was ready and in game mode. Still, there was always the x-factor: the teacher. Would this steward of the next nine months of my life be up to the task, with the empathy, wisdom, and heart to assist me in my development as a human being… or would there be trouble ahead? Tackling the first day of school as a kid is already a monumental task, but for teachers, the stress is just as high. Prepping lesson plans, learning students’ names, and ensuring that the classroom is in tip-top shape for supreme learning potential are essential components to ensure that any first-day jitters are kept to a minimum. And in Mr. Wolf’s Class (Graphix), writer and artist Aron Nels Steinke’s latest graphic novel, readers experience the trials and tribulations of a tumultuous start to the new school year with a group of rollicking and (sometimes) unruly fourth graders. For you see, not only is it the first day at school for the students, but it’s the titular Mr. Wolf’s first day as well. Can he handle the pressure? Can he survive the day? Not too long ago, I chatted with Aron, himself a public school teacher, to get the inside scoop on Mr. Wolf’s Class, the balance of creating comics and teaching full-time, what Mr. Wolf might do in his off time from teaching, making sure young readers can follow a diverse cast of characters, and what the second day of school might bring.
AJ FROST: Hi, Aron. So great to chat with you today. As I was reading Mr. Wolf’s Class, I could feel with great intensity all the emotions that one feels before the first day of school: nervousness, excitement, neurosis, and the eagerness to begin a new adventure. My first question is: Are you ready for the upcoming school year?
ARON NELS STEINKE: I’m not yet but I will be. I’ve still got a few days before I head back. I’m taking advantage of every spare minute of summer to work on the Mr. Wolf’s Class series. When I’m doing comics I like to go all in. When I’m teaching I do the same. I don’t like being distracted by either career so I can focus and do my best with each. In this regard, my summers and weekends are devoted to comics and my family of course. That said, thoughts on teaching are always finding their way in: How will I set up my room differently this year, what projects and field trips will we do, how will I structure our routines? A teacher’s brain rarely shuts off.
FROST: What emotions help guide you as you write and draw Mr. Wolf and his students?
STEINKE: I want the book to be fun, first and foremost. Getting a laugh out of a reader is really my favorite thing. I also don’t want any bullies in my book. That’s not to say that there’s no conflict or bully behavior. I just do not want a character whose sole purpose in the book is to serve as a one-dimensional antagonist. I think that can be damaging and sends a message to kids that bullies are an expected experience that you have to tolerate.
The kids in Mr. Wolf’s Class are working on developing respectful relationships with each other, even if there is some conflict and hurt feelings in the process. It’s strange because I didn’t really enjoy school as a kid. I was anxious and shy and had a hard time seeing the big picture. I did fine academically and I had some good friends but I would have much rather been home drawing, riding my bike, or exploring the woods. So when I became a teacher I was totally surprised when I found that most kids actually love school. The character Margot, who is also a rabbit, is the embodiment of the kid who never wants to miss a day away from her school community; she has a good home life but she thrives in the school environment. Of course, there are those kids for whom school is a refuge. Many kids do not have stability and support at home. I’m happy to report that year after year, the majority of my students seem to enjoy being in my class. Maybe it’s a testament to what I’m doing in the classroom. FROST: Besides taking place over the first day for students, the book also follows Mr. Wolf’s first day as a teacher. My impression is that this might be a second career for Mr. Wolf and he’s doing his best to make sure that everything goes smoothly for his students…and himself. Can you tell me a little bit about Mr. Wolf himself and where his character comes from? STEINKE: Do you mean that he has two careers going at once? Mr. Wolf is certainly a stand-in for me and I do have two careers but I don’t know what his second career would be. That’s fun to think about. I’d probably make him a painter in honor of my brother. An abstract expressionist, maybe. That sounds good–thanks for the inspiration! But if you mean that Mr. Wolf has switched from one career to another then that too comes from Mr. Wolf being my double. I started out in animation in my twenties and then when that didn’t work like I wanted it to, I started pursuing children’s publishing and comics. During that time I was working in food service to pay the rent. After several years of floundering around, I decided to go back to school and finish undergrad and then get my Master of Arts in Teaching degree. I was 30 when I got my first job as an elementary school teacher. Mr. Wolf’s Class takes place during Mr. Wolf’s first day of teaching at Hazelwood Elementary. He’s certainly not the young, fresh-out-of-college twenty-something teacher. He’s more like 32 or 33 years old and he’s really made this career a deliberate choice after moving on from something else. FROST: Part of the fun of this first Mr. Wolf book is that it explores a school day through the eyes of multiple characters. It’s a bit like Rashamon. Can you describe how you came to use this storytelling method? What are the most pressing challenges for using it in a children’s literature context, but also just to keep everything straight for yourself? STEINKE: When I was planning Mr. Wolf I was thinking about episodic television as well as the Robert Altman film, Gosford Park. In a good television series, you get multiple perspectives and complex characters that grow over the course of the series run. Gosford Park does a herculean task of developing more than two dozen characters in just over two hours and it’s so much fun to watch and re-watch how their narratives are woven together. I had never done a series or ensemble before and I really wanted to try that kind of thing out. If I get my way, there will be many more Mr. Wolf’s Class books, and I’d like to develop all of Mr. 
Wolf’s students, even those that just appear to be supporting or even background characters. I tried hard with this first book in the series to introduce as many personalities as possible without making the book too chaotic. I think one challenge is convincing myself that this model works. But I know it works because it works on television and it works in literature. It’s not a problem in Mr. Wolf’s Class because I’ve designed each character to look visually distinct from one another. Kids will typically read the book in about an hour. Then they’ll read it again, and they’ll read it again, and again, discovering new details and connections with each reading. Another challenge is sticking with the ensemble model because I do have characters that I’m really fond of and it’s easier for me to write for them. But I know the book series will be richer if I force myself to develop those other characters and think about what makes them unique. I love the Rashamon concept—how perception distorts our understanding of the truth through multiple characters’ memories around a shared experience. It’s something I often think about trying to do. I don’t think I quite achieved this with Mr. Wolf’s Class but I appreciate the comparison. FROST: Can you tell me a little bit about the eponymous Mr. Wolf? How much of his character is based on personal experience and how much is based on the general disposition of teachers or colleagues you’ve interacted with during the course of your career? STEINKE: Mr. Wolf is a fictionalized version of myself. He’s both me and not me. I started putting him in my autobiographical minicomic, Big Plans, back in 2007 and I’d use him as my stand-in whenever I was feeling hyperbolically cranky. Back then I had a frustration inside that stemmed from my inability to make a living wage. I had so much anxiety over my wanting to be an artist and to maintain a relationship with my then girlfriend (now wife). Without a way to significantly contribute to our finances, I felt so worthless. Once I got my first teaching job, I felt that anger and frustration melt away. While I was teaching I wanted to keep making comics but I didn’t have the time to really write and explore ideas so I decided to just write about what I knew—being in the classroom. That’s when Mr. Wolf as a character really started to mature and he began to represent my sweeter and more nurturing side. Those early Mr. Wolf webcomics and minicomics were written with the idea that these were universal experiences that would appeal to other teachers, too. It turned out they were useful in my classroom so I began thinking of my students as my readers. I would generally make a comic about something that happened during the day then I’d delete the words and print copies for my students. My students then would complete the story with their own word balloons, thought bubbles, and narrative captions—and they loved it! That’s when I had the idea to shift away from the minicomics and start writing Mr. Wolf’s Class as a graphic novel series for kids. FROST: One of the aspects of the book I appreciated the most was that you explore the backgrounds of all the students. Some are diligent, some are silly, some have challenging home lives that affect their school work. How important is it—as a pedagogical tool, but also narratively—to display to the reader that each student has a unique perspective? STEINKE: As a teacher, I’ve gotten to know a diverse range of young people. I did most of the concept work for Mr. 
Wolf’s Class while I was working at an incredibly diverse school. Our student population represented over 26 different language groups and, in a way, it seemed like a pluralistic utopia. It was the byproduct of a rapidly gentrifying city, however, it was still an incredibly welcoming place. We had refugees, asylum seekers, immigrants from North Africa, The Middle East, Southeast Asia, Central and South America. I want readers to find characters to identify with that maybe they didn’t think they would at first. I want to foster empathy without being preachy. I just want to reflect and represent the reality that I experience in the classroom but I also just want to encourage kids to read and have fun while doing it. FROST: As a follow-up, were the students you portrayed in the book based on real students of yours or were they all just part of your imagination? Maybe an amalgamation? STEINKE: Some of the characters were initially inspired by real people, and many were amalgams. Some characters started out as different versions of myself, even. But over time they have all gone on to have lives of their own. They live free now. When I sit down to write, the stories start to write themselves. I pull the thread and it unwinds. Sometimes I have to burn the thread and pull at another to get it just right. FROST: Mr. Wolf’s Class is the first in a series. When does volume 2 release? And what’s next after that? STEINKE: The next Mr. Wolf’s Class installment, Mystery Club, comes out on February 26th! Right now we’re just putting on the final touches to have it ready for printing. And I’m happy to report that I finished inking all 176 pages for Book #3, Lucky Stars, just this week. Now I can start coloring. My hope is that the Mr. Wolf’s Class does well enough that I can keep making more volumes. I would love to go back in time and share this book with myself when I was a kid–I think if I had the graphic novels that are available to kids now back when I was in school, I would have been much happier. We are truly living in a new Golden Age of Comics.
You have worked as a roofer for years but never had a serious injury until now when you fell from a ladder and were diagnosed with TBI. How does the doctor rate your injury, and how long can you expect workers’ compensation benefits to continue? A traumatic brain injury damages the neurons in the brain. A serious TBI has the potential to block access to information you have learned over a lifetime. Full recovery may not be possible, but through programs of rehabilitation and physical therapy, you can relearn forgotten skills and compensate for any lasting impairments. How rating works In terms of seeking workers’ compensation benefits for your work-related injury, a rating process will apply. This will indicate whether your TBI is an impairment or a disability. Workers’ compensation rates an impairment as a permanent condition that has reached “maximum medical improvement” and is not likely to change in the next year. If your injury is a disability, your doctor will assign a rating. Those who qualify to do so must have certification from the American Board of Medical Specialties. What to expect A serious brain injury almost always results in some level of impairment or disability that will remain throughout the lifetime of the patient. Therefore, depending on the severity of the brain damage you sustained, you may experience cognitive, emotional, physical or communication issues. If these are serious enough, you may need a program of rehabilitation either short-term or possibly for the rest of your life. While you concentrate on your recovery from TBI, you can rely on an advocate to help obtain the workers’ compensation benefits you deserve.
This leaflet has been produced to provide you with general information about your condition. Most of your questions should be answered by this leaflet, but it is not intended to replace the discussion between you and the healthcare team; rather, it may act as a starting point for that discussion. If after reading this leaflet you have any concerns or require any further explanation, please discuss this with a member of the healthcare team. What is a macular hole? The inner lining of the back of the eye is called the retina; it functions much like the film in a camera and transmits light signals to the brain. The most sensitive part of the retina is its centre, and this is called the macula. A macular hole is a hole or tear in the central macular region of the retina; it can form when the vitreous (a clear, jelly-like substance) inside the eye has shrunk. As the vitreous has shrunk, it has pulled away from the retina, causing a hole or tear to form. In the majority of people, if their vitreous shrinks, it does not cause any problems and there is no damage to their vision. Patients who have developed a macular hole as a result of their vitreous shrinking may notice a number of symptoms. You will be aware of a defect or dark spot in the centre of your vision and may also be aware of reduced near and distance vision. You may also experience some distortion of straight lines and a loss of your central vision, where letters from a page of writing can seem to disappear. It is important to realise that this condition is not the same as age-related macular degeneration. Do I need treatment? You can choose not to have any treatment. If you decide not to be treated you will most likely notice further loss of central vision; however, you will retain your peripheral or side vision. There is a 1 in 10 risk of a macular hole developing in the other eye, depending on the state of the vitreous jelly in that eye. If you decide not to have the operation you will not go blind, as only the central vision is affected by this condition, but it is likely that your eyesight will deteriorate further. You will need to have an annual eye check with your optician and they will be able to refer you back to us if your condition changes. If the hole has been present for a long time, for example more than a year, then the outcome of surgery will not be as successful. Following the operation it can take several months for the full effect to be realised. Surgery for macular hole This involves making three small incisions, about 1 mm in length, in the white of the eye. This will enable fine surgical instruments to enter the eye. The vitreous gel inside the eye is then removed and replaced with a salty fluid. Then, very fine forceps are used to peel away the membrane from the surface of the retina. At this point you will be asked to keep very still. Once this, the main part of the operation, is completed, the retinal lining of the eye is examined for any weak areas. If any areas of weakness are identified, you may need to have additional treatment, such as a freezing treatment (cryotherapy) or laser treatment, to those weak areas. This is in order to reduce the risk of a post-operative complication called retinal detachment. During the operation you may be aware of pressure sensations around the eye and some shadows and lights inside the eye. The amount of pressure and lights can vary due to the intensity of the local anaesthetic. Please be reassured that this is quite normal as the retina is still functioning.
Following surgery, you will need to use eye drops for up to six weeks whilst your eye heals. You will be advised on how and when to use the eye drops before leaving hospital. The drops are to prevent infection and to reduce inflammation around the eye. During the operation, a gas bubble will be inserted into your eye in order to help the healing process; the bubble will act as an internal splint to support the retina as it heals. The gas can last from two weeks up to eight weeks, depending on the gas used. Your doctor will be able to confirm this with you when you are seen in outpatients. Blurred vision after surgery: It is normal for your vision to be significantly blurred or poor after the operation; this is due to the gas used to heal the macular hole. The gas will gradually be absorbed and you will notice a line in your vision that moves, similar to a spirit level. You will start to see above the line, but below the line your vision will remain fuzzy or blurred. The gas will eventually disperse until it is only a small bubble in the bottom of your eye; the bubble will eventually disperse too. It is very important that you do not fly in an aeroplane for up to eight weeks following your operation, depending on the type of gas used during the operation. Posturing after the surgery: In some cases following your operation you may be instructed not to sleep on your back for a few days after the surgery. You may also be asked to keep your head in a specific position in order to help the gas bubble do its job; you will be given a diagram and specific instructions on how to do this before you go home following the surgery. If you require a general anaesthetic for another operation during the eight-week period following your surgery, it is essential that you inform the anaesthetist that you have gas inside the eye. This will be discussed with you before the operation and specific instructions will be given to you before you go home. What are the complications or risks of surgery? This type of surgery has similar risks to other types of eye surgery such as: - Infection in the eye – this is called ‘Endophthalmitis’ and is very rare but can give rise to serious loss of sight. - Bleeding inside the eye. - Retinal detachment – this can happen at any time following the surgery, but most commonly in the first 2-3 months after surgery, and would require further surgery to seal the retinal holes and repair the detachment. - Cataract – Almost all patients develop a cataract (a cloudy lens which impairs vision) more rapidly than normal following this type of surgery. This is because the internal fluid has been disturbed and also due to the presence of gas in the eye. In some cases, the cataract may be removed during the operation, to enable a clearer view for the vitrectomy surgery to be performed. Cataracts can also be removed in a separate operation. Your surgeon will discuss the best option for your eye condition. - High pressure inside the eye – You may require additional eye drops to control the pressure, for a period of time, following the operation. - Inflammation inside the eye. - Bruising to the eyelids and eye – This will settle after a few weeks. - Allergy to the medication used. What happens before the operation? Before the surgery you will need to attend a pre-operative assessment. At this visit you will be asked questions about your general health, and blood tests and a recording of your heartbeat (sometimes referred to as an electrocardiogram, or ECG) are performed.
These tests are to ensure that you are in good general health and well enough to undergo the surgery. Measurements will also be taken of your eyes. We will need to know what tablets and medicines you are currently taking and also if you are sensitive to anything such as Elastoplast. It may be useful to write these down to bring to your assessment. At this visit, you will be advised on what pre-operative preparations you need to make, such as altering medications. How do I prepare for the operation? Please read the information leaflet. Share the information it contains with your partner and family (if you wish) so that they can be of help and support. There may be information they need to know, especially if they are taking care of you following this procedure. This is major eye surgery which is normally carried out under local anaesthetic; however, in exceptional circumstances a general anaesthetic is offered. The operation is carried out as a day case and you will be instructed as to the time you are to attend the Eye Hospital in your appointment letter. You can expect to be in the hospital for most of the day. In some cases, it may be necessary for you to remain in hospital overnight, but this is uncommon. The following morning you will need to attend the ward for a quick eye check. - Due to space restrictions, if you want to bring anyone with you we request that only one person accompanies you. We will request that they come back to collect you later in the day. - Please bring a clean dressing gown and a pair of slippers with you, along with overnight clothing and toiletries. - If you use a hearing aid, please also bring this with you. - On the day of surgery take your usual medications as normal unless you have been advised to stop them prior to surgery. - Also, please bring with you any medications that you may need to take whilst you are in hospital, including inhalers and sprays. - Please do not wear any make-up, nail varnish or jewellery, other than a wedding ring, as you will need to remove them before your operation. - It is advisable not to bring valuables or money to the hospital, as the Trust will not accept responsibility for loss or theft. - Please eat and drink normally before coming in to hospital unless you have been advised not to do so. - On arrival in the ward a nurse will meet you and confirm the information that you gave at your pre-assessment appointment. - We will give you eye drops before the operation. This is to dilate (widen) the pupil of the eye. - You will be asked to get changed into a hospital gown and your dressing gown and slippers. - Patients are offered drinks and light snacks free of charge. - Visitors are asked to use the dining facilities in the Eye Hospital café, main hospital or at the Women and Children’s Hospital. A drinks machine is also available on the ground floor of the Eye Hospital. - During your stay several checks are made; these are to ensure that you will be receiving the correct treatment. What happens afterwards? At this time you will be advised when you need to come for a check-up, how to look after your eye, and when to use the eye drops you need to help the eye heal. Upon your discharge home, please contact the ward immediately if you experience any of the following problems: - Excessive pain. - Loss of vision. - Increasing redness of the eye. - Discharge from the eye. Follow up appointments - You will be reviewed in the Eye Clinic after two weeks and then at eight weeks following surgery.
Is there anything that I should avoid after the operation? Important points to follow Please remember the following points: - You do not need to keep the eye covered once the anaesthetic has worn off. - You should avoid heavy lifting and straining for the first week. - You should avoid getting shampoo and soap into your eye for four weeks. - You should avoid swimming for 12 weeks. - You must not drive until you reach the minimum legal standard of vision. - You should not drive until the anaesthetic has completely worn off and there is no double vision. - It is normal for the eye to appear red and feel gritty; this is due to the membrane covering the white of the eye being sutured after the operation. Some of these effects last for up to four weeks. The stitches are dissolvable but take several weeks to completely dissolve. General Advice and Consent Most of your questions should have been answered by this leaflet, but remember that this is only a starting point for discussion with the healthcare team. Consent to treatment Before any doctor, nurse or therapist examines or treats you, they must seek your consent or permission. In order to make a decision, you need to have information from health professionals about the treatment or investigation which is being offered to you. You should always ask them more questions if you do not understand or if you want more information.
Maria Sklodowska-Curie is among the most famous scientists in the world, one who made a vital contribution to the development of science. Her work on radium, her Nobel prizes and the institutions she founded are common knowledge. But today, on the anniversary of her death, we want to talk a little about the personal history of this great woman, which is also worth remembering. Alas, it shows that in more than 100 years little has changed in how harshly society judges women for the private choices of consenting adults. We are talking about the relationship between the widowed Curie and the scientist Paul Langevin, who was married; when the affair became known, all the blame and the anger of the crowd fell on Marie. Maria Sklodowska-Curie is known as the first scientist to receive two Nobel prizes. She won the first in 1903, when she and her husband Pierre Curie were awarded the prize in physics for research in the field of radiation. Then, in 1911, she received the Nobel prize in chemistry for the discovery of radium and polonium. But as her prominence as a brilliant scientist grew, so did the public's curiosity about her personal life, especially after Marie was widowed in 1906. Marie and Pierre Curie in the lab. Four years after Pierre Curie died (he was run over by a horse-drawn carriage), Marie began a passionate affair with one of her husband's former students, the physicist Paul Langevin. After Pierre's death, Marie had been appointed in his place at the Sorbonne to teach the physics course, becoming the first female professor in the history of the university and of France in general. A scene from the movie “Marie Curie” (2016), with Karolina Gruszka in the role of Marie Curie. Langevin, by that time, was already married and raising four children. In addition, Marie was about five years older than him; the pioneer of radioactivity was 43 at the time. Paul's marriage was unhappy: one of the biographies describes an episode in which he appeared in the lab with bruises, admitting that his wife and mother-in-law had beaten him with a metal chair. According to people around them, the two were brought together not only by a fanatical dedication to science but also by an inner emptiness: Curie took the death of her husband hard and struggled with depression for some time. Shortly before the affair with Paul she had suffered another loss, that of her father, who had been her support after Pierre's death. The lovers spent time together in Paris, in an apartment near the Sorbonne rented specially for their meetings, until Langevin's wife Jeanne suspected him of infidelity and decided to investigate. Paul Langevin and his wife, Jeanne. She hired a detective, who stole the love letters, and began to blackmail the pair, demanding that they stop not only their personal but also their professional communication (she had little understanding of the value of Langevin's work, as she came from a family of merchants and reproached her spouse for earning too little). Jeanne even threatened to murder Curie, and then handed the letters to a relative of hers who was the editor of a Paris newspaper; the pretext was that Paul and Marie were taking part in the same conference. Of course, the letters were published. A scene from the movie “Marie Curie”: Arieh Worthalter (Paul Langevin) and Karolina Gruszka (Marie Curie). “I am trembling with excitement at the thought of seeing you again and of telling you how much I missed you.
I kiss you tenderly in anticipation of tomorrow.” Personal declarations like these could now be read by every gossip in Paris. The French newspapers gladly seized on the scandalous story. They began to portray Maria as an insidious other woman, a Jewess who had seduced Langevin, though she was not in fact Jewish. This was done deliberately: anti-Semitism was then common in Europe, and such details only increased the public outrage and discontent. Marie Curie (left) and Paul Langevin. “Stealer of husbands! Strumpet!” Such were the epithets the reporters awarded Curie. In addition, she was accused of atheism, which in those days was a serious “shortcoming” in the eyes of public opinion. When the scandal first broke, Sklodowska-Curie was at a conference in Belgium; she returned to Paris when the flames of popular anger were already burning at full strength. An angry mob demanded her expulsion from the country, so Marie and her two daughters went into hiding with her friend, the writer Camille Marbo. Marie Curie with her husband Pierre and daughter Irene. In an effort to defend the Curies' honor, Langevin challenged one of the newspaper's editors to a duel. The two men went out “to battle” and came face to face on 25 November 1911, but it came to nothing: both refused to shoot. Her colleague and friend Albert Einstein also took Curie's side. The library of Harvard University holds the letter he wrote to Marie from Prague, when the press began to attack Curie over her affair with Paul. “If the mob is going to pester you, just stop reading this nonsense. Leave it for the vipers for whom this story was fabricated,” wrote Einstein. Albert Einstein and Marie Curie. Einstein also defended the scientist in his correspondence with another close friend, the Zurich physician Heinrich Zangger, calling into doubt the idea that Maria could be a cruel temptress. He defended her, however, in a rather peculiar manner that may seem offensive: “Curie has a sparkling intelligence, but despite her passionate nature she is not attractive enough to represent a threat to anyone,” wrote Einstein. It is noteworthy that in the midst of the scandal the award of the Nobel prize in chemistry to Marie Curie was announced. In Sweden, however, the prospect of the arrival of the woman who had made such a contribution to science was met with skepticism: no one wanted the gala evening tainted by rumors and scandals. Representatives of the Royal Academy of Sciences strongly recommended that Curie not come to Stockholm. “All my colleagues have advised me to ask you not to come here. I also ask you to stay in France, because no one can predict what might happen during the presentation of the awards. Honor and respect for our Academy, for science itself and for your country dictate that in these circumstances you should decline to come here to receive the award,” said the letter written by a secretary of the Academy of Sciences. “I believe there is no connection whatsoever between my scientific work and the facts of my personal life, which have been presented wrongly and do not deserve the attention of respectable people. I am deeply troubled that you do not share my opinion,” she wrote in response; and to Stockholm, for her well-deserved award, she of course came. “I would like to remind you that the discovery of radium was made by Pierre Curie together with me. We also owe to Pierre Curie fundamental experiments in the field of radioactivity.
My own work consisted of the chemical studies on the isolation of radium,” she said in her speech. As for the affair with Langevin: shortly afterwards, having come close to divorce, he returned to his wife, but the marriage never became a happy one. A few years later Paul began a new affair. A scene from the movie “Marie Curie”. Maria, from then on, devoted herself entirely to science; she considered it a kind of tribute to her beloved Pierre. “My destiny broken, I was not able to plan my future; however, I could not forget what my husband used to say, that even without him I must continue my work,” she wrote after the death of Pierre Curie. The parallels between this story and the present day are, alas, obvious. Relationships of adult women with younger men are still judged by the public more harshly than couples where the man is the older one.
Last week, I responded to a question in the She Makes it Happen! – with Lara Young Group about how to stay on top of your negative thoughts and feelings. It was a fabulous question and I thought I would share Part A of the answer I gave with you here. But before I do, I’d like to share an ‘ah ha’ moment that I had at the weekend when I was struggling to cope with my three and a half year old’s tantrum. Practice what you preach While my daughter was flailing about, screaming and refusing to get dressed for the day on Sunday (Mother’s Day) these things were going on for me: a) I was thinking “Please no, not again”, “Oh for goodness sake, I’ve had enough of this” “Why can’t you just co-operate?” and after about 1/2 an hour “I’m not putting up with this *%!#! any more!” b) I was doing my best to remain calm, to speak slowly and reassuringly to my daughter whilst physically, my jaws were clenched and I could feel my stress levels rising along with a sick tide of helplessness at not knowing how to help her c) Feelings of sadness and guilt followed. I berated myself for not coping with the situation as well as I “should” then resentment made an appearance as I found myself telling myself “This isn’t fair, it’s Mother’s Day!” Then I remembered that I had a choice about how to respond. And for the remainder of the day I practiced a new way of thinking about her actions and their meaning. I sought to understand her. I consciously held positive thoughts in my mind and looked for opportunities to praise her, encourage and love her. I remembered that her behaviour is hers, and not mine. I affirmed “I am a loving and kind mother” and “All is well” Last night, I re-read my response to the question in the She Makes it Happen and I realised how beautifully it applies to my own situation. Perhaps you can apply it to your own life too? Here it is: 1. The first thing to recognise is that everything is temporary. How you feel, what your thoughts are and your physical well-being. 2. The second thing to hold true is that EVERYTHING is a CHOICE. We choose how we feel and how we respond and interact with the world. Your thoughts are far more powerful than you once imagined. The way we choose to think about people and situations literally changes our relationship with them AND our reactions to them AND our physical well-being too. You cannot think a negative thought and feel good emotionally and physically at the same time. It is impossible to think badly about a situation or about a person and experience positive emotions and physical well-being at the same time. Try saying to yourself “Nothing good ever happens to me, it’s not fair” How does that negative thought make you feel? What happens to your body – the sensations in your heart, stomach, chest when you say those words to yourself? Is the experience a positive or a negative one? My guess is that when you say those words, a negative emotion, such as anger, or resentment or disappointment follow. And that when you say those words, your body reacts with some negative physical sensation – perhaps a feeling in your tummy or a tightening of your shoulders. EVERYTHING is linked – beginning with your thoughts. And that’s why developing the practice of CHOOSING the words you say to yourself and to others is so important for your own well-being. My first tip therefore is to choose more positive thoughts. Now, this is a habit that can take time to develop. 
Most people don’t turn into Pollyanna overnight 🙂 BUT simply by taking the very first step of recognising the impact that your language has on yourself, you are on the way to creating positive change. Remember, as individuals we love being right. And so if we say to ourselves “Nothing good ever happens to me, it’s not fair” or any other negatively oriented statement – then our unconscious mind will do its very best to make it true by looking for EVIDENCE to support this truth. So instead, start by re-phrasing and re-purposing your language so it supports you positively. For example “I love noticing all the good things that happen in my life”. As you say those words to yourself, notice the changes that occur in how you feel emotionally and physically. Are they more positive? Play around with words until you get to the place where you are FEELING more positive. 3. Because your thoughts directly impact on your emotional and physical well-being make a conscious choice to FEEL GOOD NOW. Have you heard of the emotional scale? At one end are negative emotions like anger, fear, hurt, resentment and at the other end we find positive emotions like happiness, contentment and joy. The key is to take ACTION to move yourself up the emotional scale. Now it’s not about leaping from anger to joy or from apathy to passion, but it is about TAKING STEPS to move towards a more positive emotional state. Write down a list of things that you love doing, music that lifts you up, quotes that inspire you and when you are in a not so great place emotionally READ, WRITE or SING the words that make you FEEL GOOD NOW and then embrace the new emotion that arises. I use affirmations and physical movement (like a happy dance) to get me out of a funk when I’m in one. Sometimes, the simple act of getting up and making a cup of tea or phoning a friend or walking the dogs or playing with my children can make all the difference. I also use some Neuro Linguistic Programming (NLP) techniques to physically anchor positive emotions that I can release at any time. Whatever you do – THE KEY IS TO TAKE ACTION. So that ^^^^ was my response to the question in my group. And having re-read it and applied it to my own situation, I know that these tips work. Or at least, like all change, it’s a work in progress 🙂
Zippers haven’t changed much since they were first invented, and neither have the problems we all have with them. From stuck zippers to teeth that just won’t clinch, here’s how to fix all the problems you’ll run into with anything that zips. Your zipper is stuck When your zipper is stuck it feels like it’s getting caught on something. The zipper might not come down at all, and until you fix it you’re trapped inside your jacket. The best way to fix this? Grab a graphite pencil and rub the pencil tip on the teeth. Try it again and it should work. If that doesn’t work, it’s time to move on to a lubricant. Windex is good because it’s not oil-based, but you can also use bar soap or lip balm. Start with the zipper all the way up, and slowly apply the lubricant to the teeth. Then inch the zipper down some more, reapply, and continue doing that until the zipper comes all the way down. This is especially handy for fixing a zipper stuck in the fabric itself. The teeth don’t close (or they keep popping open) One of the most annoying problems with a zipper is teeth that won’t close. The problem can result from a few different scenarios, and sometimes the above trick of using a pencil (or a bar of soap) will smooth out the teeth enough so they’ll work again. If it doesn’t, the slider might not be working properly. First, double-check to make sure a piece of cloth or thread isn’t stuck in the zipper. Next, look at the individual teeth. If any of them are sticking out, grab a pair of pliers and move them back into place so they’re all straight. If the teeth are straight and clean, take a look at the slider itself. Over time, the slider starts to come apart, and when that happens it stops clinching the zipper teeth together. Grab some pliers and try closing the slider together until it catches the teeth again. If your zipper is on a pair of jeans, the solution is a little more complicated. If you can, you need to remove the metal bumper at the bottom and replace it with a stitch, or just tie it off in the middle if teeth are missing at the bottom. Unfortunately, this only really works with pants that allow you access to the bottom bumper. If that fails, or you’re working with pants where you can’t get to the entire zipper run, you might need to replace the zipper completely. While you can do it yourself with some pliers, scissors, and thread, replacing the zipper on a pair of pants is only around $US5 ($7)-$US10 ($14) at most tailors. The zipper won’t stay up A common problem with pants zippers is a zipper that won’t stay up. This can lead to all types of embarrassing situations. Unfortunately, you can’t really fix this problem permanently unless you completely replace the zipper. That said, there are two simple temporary fixes. The easiest is to slide a key ring through the zipper pull and over your pants button. This keeps the zipper up in a simple way. If you’d prefer a little more flexibility, you can also try a rubber band. The slider broke off If the slider comes off completely, or if it’s not closing the teeth right, then you need to replace it. To get the old slider off, use some pliers. Once that’s done, attach the new zipper slider by sliding it onto the teeth. That’s it, you’re done. Replacing the slider is usually pretty simple, and should take only a couple of minutes. The zipper pull broke off When the pull breaks off a zipper, it makes it incredibly hard to zip the zipper up. The good news is that this is pretty much the easiest fix out there.
You can turn a paperclip, a keyring, or even a telephone wire connector into a zipper pull. Just slide it through the tab on the slider et voila — you have a new zipper pull. Sure, it’s not exactly the most stylish solution, but at least you’ll be able to get in and out of your clothes.
The COVID-19 situation seems to have escalated very quickly. For mothers, it can be very scary as we try to protect our littles. Many of you are completely quarantined. Others are out of work and stuck at home. And with schools closing for an extended Spring Break, you are probably wondering how you will keep your sanity for 2-6 weeks. If you are struggling, reach out to us on social media messaging. We can message you and encourage you and assure you that you are not alone! But in the meantime, here are 10 things you can do indoors with your kids during quarantine! 12 Museums That Offer Virtual Tours If you are worried about your kids missing school, this is an awesome virtual field trip! See the world’s greatest exhibits all from your couch. History lesson time! You can do one museum a day! Check out Scholastic’s distance learning! They include different grade levels for each child! Even 30 minutes a day will help keep your kids up to speed with education. They have fun games and educational videos to keep your kids entertained. Also, check out for math lessons. Until April 30, parents can order the math curriculum for free so their kids can stay current in their studies. I saw this on social media and think it’s a great way for us to connect with our kids: "spend time teaching life skills! Cooking, cleaning, how to change the oil in a car, how to do laundry, sew a button (or a hem), balance a checkbook, etc… Not all learning is done in the classroom." These are the skills and moments kids will look back on one day and remember their parents teaching them. Learn a new dance. YouTube has millions of videos for any age or level. Proper dance is a lost art. What better time to learn the waltz, foxtrot, or hip-hop than during quarantine? (This is also great exercise and can get all your kid’s energy out of their systems since they are stuck indoors.) When I was a little girl, my mom would check out “Hank the Cowdog” cassettes from the library and the whole family would make popcorn and snuggle in and listen. We’d laugh all night long. Now with Audible, it’s easier than ever to listen to novels and kids’ books as a whole family. Even though the graphics aren’t there, this forces kids to use their imagination as the reader takes them to new worlds. If you are looking for a way to rest their eyes from a screen, this is an amazing option! Your first month is also free with Audible and then $14.99. You can cancel anytime. Craft (with Amazon) Amazon still delivers, guys! Candle making, soap making, paper mache, painting, bead-making, even soda brewing kits! If you can’t leave, that’s ok! Let Amazon bring it to you! Write Thank You Notes To First Responders Nurses and doctors put their lives and the lives of their families on the line every single day so our country can move forward. Nursing homes and assisted living centers are on quarantine and elderly people are locked in their rooms until further notice. Consider teaching your children a lesson about thankfulness and jotting notes to first responders at your local hospitals and clinics or to lonely people in homes. Maybe use your newly found crafting skills to make bracelets or candles as gifts to remind others they are thought of and prayed for. Write A Book With Your Family I actually saw this one in USA Today. Pick a character and each member writes a chapter about their adventures. Read aloud to each other. This sounds so fun! Check Out Google Earth This is a fun way to take a “virtual vacation.” It’s also a sneaky way for your kids to learn geography.
Got to keep the education going until school starts, right? Check Out These Bakers, And Then Bake Use Amazon Fresh to deliver ingredients, and then go to town (figuratively, of course) learning new recipes! Learn anything from Asian food to Mexican to American. The list below features some of the internet's most popular cooks. Staying in is more comforting with comfort food. I hope this list inspires you to have fun, learn, and enjoy your time with loved ones during your quarantine! Remember, if you have not lost loved ones, you have so much to be thankful for! This is a wonderful opportunity to spend time with our families and make memories in a way we normally don’t have the time for. Our American lives are riddled with deadlines and stress and go-go-go. Take a moment to view this as an opportunity to rest, catch up, and be present.
Committed to Sustainability What Is Aquaculture? Aquaculture, also called “fish farming”, is the practice of breeding, raising, and harvesting aquatic organisms. This is most popularly seen in the food industry with animals such as salmon, tilapia, and bass. Some farms even grow seaweed! (Sushi, anyone?) But why do you care about growing a bunch of fish filets in a pond? Well, maybe you don’t, but aquaculture is so much more than just that! In the aquarium industry, aquaculture is well on the road to becoming the next big thing. While this practice has been happening on the freshwater side for years, with the popularity of marine aquariums lately, the industry is expanding quickly. Saltwater organisms such as clownfish, dottybacks, cardinalfish, tangs, and fire shrimp are already being bred regularly, and many more are popping up. Why is aquaculture important? While there are many well-managed wild fisheries, there are also many poorly managed ones. Problems such as overfishing, cyanide fishing, improper handling, and extensive travel time result in high losses, which over time cause wild populations to dwindle. Raising these animals in captivity helps take pressure off of wild populations. When fish can be provided to the aquarium industry without harvesting them from the wild, this gives the wild populations time to recover. How Can We Support Aquaculture? Something Fishy makes a huge effort to carry sustainably harvested fish – including aquacultured ones! We have partnered with organizations such as Mystic Aquarium and Roger Williams University (amongst others) to be able to provide these options to our clients. While sometimes these fish are not the least expensive option, it is worth it to support sustainable alternatives and make a positive impact on the environment, along with getting a fish already accustomed to tank life, which makes them very hardy. Something Fishy is committed to educating our clients, friends, and fans about sustainable practices in fishkeeping so we can all continue to enjoy Mother Nature's handiwork up close.
Ridgid and Milwaukee both have a reputation for quality products and a history dating back to the 1920s. These two brands have competitive tools ranging from saws to impact drivers. However, the popularity of both companies’ available variety of products can make it difficult to decide which brand to purchase. The choice generally boils down to specific specifications you prefer and what your budget is. Here is a quick comparison list of Ridgid vs Milwaukee: - Both brands originated in the 1920s as American manufacturing companies in the Midwest. - Ridgid caters mostly to electrical and plumbing trades with some general-purpose tools. However, Milwaukee has a reputation for general-use tools for the construction industry and homeowners or novice tool users. - Although Emerson Electric is the parent company for Ridgid, both brands have power tools on the market produced by Techtronic Industries, which is Milwaukee’s parent company. - Milwaukee tools are more widely accepted and durable compared to Ridgid tools. - Most Milwaukee tools are electrical and powered by Lithium-Ion batteries, whereas Ridgid also offers many basic tools, such as wrenches and hammers. - Ridgid sticks to tradition with time-tested methods of producing tools. However, Milwaukee is more innovative, constantly updating and advancing its tool line. - Both brands provide good warranty options that vary depending on the product. However, Ridgid has a 3-year limited warranty for power tools, but Milwaukee offers a 5-year warranty. Now let’s learn some background information about the two brands and compare their best tools. While both brands have a good reputation for the tools they provide, I recommend going with Milwaukee for general use and homeowners. Their products are worth the investment, built to last, and offer great power and performance. You can also easily use them even as a novice or to teach your kids how to use power tools. Additionally, most Milwaukee tools are significantly lighter than Ridgid tools. When you need to use them for extended periods, the lighter weight creates less hand fatigue and makes the job easier overall. Having said that, if you want tools specifically designed for electrical or plumbing use, Ridgid has a wider variety of specialty tools for these purposes. Both brands guarantee the quality of their products and offer great warranty options to back it up, so all-in-all, you won’t go wrong with any choice specific to your preference. What to Consider When Choosing Between Ridgid and Milwaukee Before shopping for the next tool to add to your workshop, here are some factors to consider: 1. Product Warranty Both Ridgid and Milwaukee guarantee the quality of their products through the great warranties they provide. Most Milwaukee cordless power tools include a 5-year warranty, and some of their specialty tools have different warranties. For example, their RedStick levels have a limited lifetime warranty. Similar to Milwaukee, Rigid offers a range of warranties depending on the type of product. They cover their power tools with a 3-year limited warranty plus an optional Lifetime Service Agreement. Ridgid also offers a Full Lifetime Warranty on all their tools for material and workmanship defects. 2. Powering Technologies Both Ridgid and Milwaukee offer a wide range of products, including cordless tools using REDLITHIUM™ or OCTANE™ batteries. Battery life is a key feature contributing to the quality of a product. 
Milwaukee is known for its long-lasting batteries and also provides a warranty on battery products. Most smaller cordless products for both brands are either 12V or 18V. They also have larger products with higher voltages, for example, 120 V for Ridgid’s Hole Cutter. Make sure you select a product with the power you require. 3. Tool Weight Heavy tools can cause hand fatigue and be difficult to use for extended periods. Compared to Milwaukee, Ridgid tools such as their drills are bulky and heavy. Milwaukee tools are generally lighter, sleeker, and more comfortable to hold. Ridge Tool Company An American manufacturing company, the Ridge Tool Company, makes hand tools under the brand name Ridgid. Its long history dates back to 1923, when the company started by inventing the pipe wrench. It became a wholly-owned subsidiary of Emerson Electric in 1966. Today, they sell over 300 different types of tools geared toward professional use. The trades they cater to include construction, plumbing, pipe fitting, and HVAC. They focus on providing quality products at a price that’s affordable to the masses. The company’s wet/dry vacs are produced by their parent company, Emerson. Additionally, Techtronic Industries (TTI), Milwaukee’s parent company, has a licensing agreement with Emerson to produce Ridgid power tools sold at stores such as Home Depot. The Ridge Tool Company also continues to invent tools and has multiple patents in the United States. The list of over 40 patented products includes pipe cutters, hole saws, roll groovers, and batteries. Both Ridgid and Milwaukee offer a wide range of tools. However, Ridgid aims most of its tools towards professionals in specific trades, such as plumbing and electrical. Here are a few examples of the numerous popular Ridgid products. - Ridgid Wrenches The Ridge Tool Company originated from the invention of the pipe wrench and has maintained a reputation for providing a wide variety of wrenches. Today, you can purchase offset pipe wrenches, end pipe wrenches, hex wrenches, chain wrenches, spud wrenches, and more. - Ridgid Impact Drivers Ridgid has multiple brushless 18V impact drivers, including 3-speed or 6-mode drivers for added versatility. Most of these tools are powerful, with over 2,000 inch-pounds of torque. For example, the 18-Volt OCTANE Brushless Cordless 6-Mode 1/4-inch Impact Driver has an industry-leading 2,400 inch-pounds of torque. - Hole-Cutting Tools Specializing in products for specific trades such as plumbing, Ridgid offers hole-cutting tools, including hole saws, core drill bits, and hole arbors. Their Ridgid 76777 Hole Cutter can cut holes up to 3 inches in diameter into steel pipes. - Ridgid Circular Saws Ridgid has cordless circular saws and ones with cords 6 to 10 feet long. Available products range from $50 to $400, and some feature cooling technology for the motor. For a product with the maximum cutting depth, you’ll want the Ridgid OEM R8652 Gen5X 18V. It’s durable and versatile, with an angle range from 0 to 56 degrees. There’s also an attached dust blower to clear your path when cutting. Why You Should Buy Ridgid Products Are you considering purchasing a Ridgid tool? Here are reasons why you should buy them. - Affordable Tools Although a reliable brand that guarantees the quality of its products, Ridgid offers a wider range of affordable tools suitable for various budgets. - Great Warranty Ridgid ensures a lifetime warranty on material and workmanship defects and generally has 3-year limited warranties for most of their products.
In addition, some tools, such as the GEN5X line, also include a lifetime service agreement. Milwaukee Electric Tool Corporation The Milwaukee Electric Tool Corporation’s history also dates back to the 1920s, when the company started in 1924 with the launch of their quarter-inch capacity drill, the Hole Shooter. They expanded to provide manufacturing tools for the U.S. Navy in the 1930s and through World War II. The company introduced their Sawzall reciprocating saw in 1951, and in 2005, they began using lithium-ion batteries in power tools. Among other products, Milwaukee has several patents for battery chargers and lithium-battery packs. In 2005, Milwaukee also started operating as a subsidiary of the Hong Kong company Techtronic Industries (TTI). Since 2019, Milwaukee has shifted its focus from producing affordable tools for the masses to concentrate on elite and expensive power tools. Today, they offer over 500 tools and over 3,500 accessories. They conduct a lot of research and development to incorporate new technologies into their tools continuously. For example, Milwaukee One-Key contributed significantly to Bluetooth tool technology. Similar to Ridgid, Milwaukee offers quality professional-grade tools. They also have products suitable for general use, homeowners, and novice tool users. Here are some of their popular options. - Milwaukee Table Saw Milwaukee offers multiple table saws in their Fuel M18 family. Their cordless 2736-21HD Table Saw is also notable. It has a speed of 300 RPM and has Redlink Plus™ intelligence for performance optimization and overload protection. - Milwaukee Impact Drivers Impact drivers developed by Milwaukee are versatile, durable, powerful, and technology-inspired. They make both cordless and corded impact drills that are lightweight and compactly designed. Milwaukee has incorporated their new Fluid Drive Hydraulic Power into the M18 Fuel Surge 1/4-inch Hex Hydraulic Impact Driver. As a result, it’s one of the quietest cordless fastening tools on the market. It also reaches speeds of up to 3,000 RPM. - Milwaukee Drills Milwaukee has cordless and corded drills in their M12 and M18 line of products. Their drills are popular due to the higher torque and wider speed range compared to competitors. These drills are also constantly improving as Milwaukee takes its customer surveys and feedback seriously. - Milwaukee Multi-Use Products Milwaukee’s commitment to utilizing the latest technologies means they have multi-use products that can cross-communicate over multiple platforms. For example, the 2626-20 M18 Orbiting Multi-Tool has wood cutting and sanding abilities. It also features 12 speed settings ranging from 11,000 to 18,000 OPM. Why You Should Buy Milwaukee Products If you are considering purchasing a Milwaukee tool, here are the reasons why you should buy them. - Long-Lasting Products Milwaukee tools have a reputation for being heavy-duty and durable. Even though they are more expensive, you can expect longer battery charges and a longer-lasting tool. - Updated Technology Focusing on research and development, Milwaukee always aims to update its tools using the latest technology available. As a result, they have quality tools with lightweight designs. - Great Power and Performance Milwaukee tools offer greater power and performance compared to competitors. Rest assured, you can complete jobs requiring high-torque applications efficiently. Both Ridgid and Milwaukee offer high-quality and professional tools. They are reputable brands with histories dating back to the 1920s.
While Ridgid focuses on tools for professional trades, Milwaukee also has options suitable for first-time tool users. Before deciding which brand to purchase, consider what specifications you are looking for and your budget for the investment. These two aspects, combined with the information in this article, will help you make an informed decision.

- Are Milwaukee Batteries Better Than Ridgid’s?
Yes, generally, Milwaukee batteries are known to be longer-lasting than Ridgid batteries. Most Milwaukee tools use lithium-ion batteries, with their M18 line using REDLITHIUM™ batteries. This line of batteries offers more work over its lifetime as well as overload protection, cell monitoring, and temperature management. For Ridgid tools that require batteries, it’s best to select their OCTANE™ line of batteries for optimal performance. However, most products don’t come with the battery included, and Ridgid battery prices can be costly.

- Is Milwaukee More Expensive Than Ridgid?
Ridgid products are generally easier on the budget than Milwaukee’s. Ridgid aims to provide tools affordable for the masses to use in specific professional trades. However, there are some products at high price points. Milwaukee has recently shifted its focus to producing elite, high-quality, expensive tools for its loyal customers. However, the durability and long-lasting features of Milwaukee products still make them worth the investment.
Salvage and Wreck

We are frequently asked questions about the law of salvage and wrecks. These questions are usually something like "If I find a vessel adrift or in trouble and put a line on it or otherwise save it, is it mine?" or "If something washes up on the beach, can I keep it?" or "If I find lost treasure (or a shipwreck or an airplane hull), is it mine?". The answer in all cases is no. Generally, one does not become the owner of maritime property merely because one saves it or finds it. If a person renders assistance to a vessel in distress and saves it, they are entitled to a salvage award, not an ownership interest. If a person finds lost or abandoned maritime property, they are required to report it to the Receiver of Wrecks and they are entitled to a salvage award. Both the law of salvage and the law of wrecks are governed extensively by statute and regulations. The most relevant statutes are the Canada Shipping Act, 2001 ("CSA, 2001") and the Navigation Protection Act ("NPA"). The Marine Liability Act ("MLA") is also relevant, but less so.

Salvage is one of the most ancient branches of maritime law. It is concerned with the saving of life or property at sea. Generally, when maritime property is in danger and a volunteer successfully saves the property, the volunteer is entitled to a salvage award that is determined by the courts. The three necessary elements of a salvage award are: danger; voluntary assistance; and success. If the property is not in danger, no salvage award is payable. If the person is not a volunteer (i.e. is under some legal obligation to render assistance), salvage is not payable. (But note that pursuant to s. 147 of the CSA, 2001, assisting a person at sea, responding to distress signals or following the directions of a rescue coordinator does not disentitle a person to a salvage award.) If the salvage attempt is not successful, no salvage award is payable ("no cure, no pay").

The amount of the salvage award depends on a number of factors including: the value of the saved property; the extent of the danger; the time required and the expenses and losses of the salvor; the skill of the salvor; and the risks and liabilities avoided by reason of the salvage (e.g. pollution). Although there is no requirement that there be a contract between the salvor and the owner of the property in danger, most large-scale commercial salvage operations today are done under a salvage contract. The most well known, if not the most common, of these is the Lloyd’s Open Form Salvage Agreement, which includes a "no cure, no pay" provision as well as arbitration to determine the salvage award. Such contracts are valid but, pursuant to art. 7 of the Salvage Convention (see below), are subject to annulment if entered into under undue influence or the influence of danger, or if the payment is excessive for the services rendered.

Priority of Salvage Awards
Salvage has traditionally been considered a "maritime lien", which means that a claim for salvage is not defeated by a change in the ownership of the salved property and that the claim has a fairly high priority in the event of a priorities dispute. Pursuant to s. 86(4) of the CSA 2001, a lien arising from a claim for salvage has priority over all claims except costs relating to the arrest and sale of the vessel. The limitation period for a salvage claim is two years from the date the salvage services were rendered, but this period may be extended to the extent and on such conditions as the court deems fit.
(CSA 2001, s. 145, and Art. 23 of the Salvage Convention)

Limitation of Liability
Salvors are entitled to limit their liability pursuant to article 2 of the Convention on Limitation of Liability for Maritime Claims, 1976, as amended by the Protocol of 1996 (the "LLMC"; Schedule 1 to the MLA). Claims for salvage (that is, claims for the salvage award itself) are, however, excepted from limitation pursuant to art. 3 of the convention (Schedule 1 to the MLA). The limits of liability for any salvor not operating from any ship, or for any salvor operating solely on the ship to or in respect of which he is rendering salvage services, shall be calculated according to a tonnage of 1,500 tons. (Art. 6 r. 4 of the LLMC)

International Convention on Salvage, 1989
Salvage law is predominantly governed by the CSA 2001 which, pursuant to s. 142, implements the International Convention on Salvage, 1989 (Schedule 3 to the CSA 2001). The Salvage Convention applies to salvage operations both at sea and in inland waters. (Canada could have declared that the convention did not apply to inland waters under article 30 but did not do so in its reservations, which are recorded in Part 2 of Schedule 3 to the CSA 2001.) In summary, the main provisions of the Salvage Convention are:
- Art. 4 – State-owned vessels are exempted from the Salvage Convention;
- Art. 6 – The convention applies to all salvage operations unless a contract specifically provides otherwise, but the contract is subject to annulment if entered into under undue influence or the influence of danger or if the payment is excessive for the services rendered (Art. 7);
- Art. 8 – The salvor has the duty to carry out the operations with due care and to prevent or minimize damage to the environment. The owner has the duty to cooperate with the salvor and similarly to prevent or minimize damage to the environment;
- Art. 10 – Ships’ masters are obliged to render assistance to any person in danger of being lost so far as this can be done without serious danger to their vessel;
- Art. 12 – Salvage operations which have a "useful result" give a right to an award;
- Art. 13 – The award shall not exceed the salved value but is to be fixed with a view to encouraging salvage operations and shall take into account: the salved value; the skill and effort to minimize damage to the environment; the measure of success; the danger involved; the skill and efforts of the salvor; the time, expenses and losses of the salvor; the risks run by the salvors; the promptness of the salvage; the vessels and equipment used; and the state of readiness and efficiency of the salvor;
- Art. 14 – Where the vessel threatened damage to the environment and the salvor prevented or minimized that damage, the salvor is entitled to special compensation if the award under Art. 13 is not sufficient;
- Art. 18 – A salvor may be deprived of all or part of his award if guilty of fault, neglect, fraud or dishonest conduct;
- Art. 19 – A salvor is not entitled to an award where the owner expressly and reasonably prohibited the salvage operation;
- Art. 21 – Upon the request of the salvor, the persons liable to pay an award shall provide security, including for interest and costs, and the salved property shall not be removed from the port or place to which it is first taken until the security is posted;
- Art. 23 – An action for payment of a salvage claim shall be brought within two years from the day on which the salvage operations terminated.
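As a rough, purely illustrative way of seeing how these rules fit together (and emphatically not legal advice), the short sketch below encodes the three elements of a salvage claim, the "no cure, no pay" principle, the Art. 13 cap of the award at the salved value, and the two-year limitation period that a court may extend. The function names and the simplified inputs are hypothetical.

```python
# Purely illustrative sketch (not legal advice) of the logic described above:
# the three elements of salvage, "no cure, no pay", the Art. 13 cap at the
# salved value, and the two-year limitation period that a court may extend.
# Function names and the simplified inputs are hypothetical.
from datetime import date

def salvage_claim_viable(in_danger: bool, volunteer: bool, successful: bool,
                         services_ended: date, claim_filed: date,
                         extension_granted: bool = False) -> bool:
    """Checks the three elements of salvage plus the basic limitation period."""
    if not (in_danger and volunteer and successful):   # "no cure, no pay"
        return False
    within_two_years = (claim_filed - services_ended).days <= 730  # approx. two years
    return within_two_years or extension_granted       # a court may extend (CSA 2001, s. 145)

def capped_award(assessed_award: float, salved_value: float) -> float:
    """Art. 13: the award may not exceed the salved value of the property."""
    return min(assessed_award, salved_value)

# Example: a successful voluntary salvage, with the claim filed 18 months later.
print(salvage_claim_viable(True, True, True, date(2015, 1, 10), date(2016, 7, 10)))  # True
print(capped_award(assessed_award=120_000.0, salved_value=100_000.0))                # 100000.0
```

In practice, of course, the size of the award within that cap turns on the multi-factor assessment in Art. 13 and is for the court or arbitrator, not a formula.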
Historical and Cultural Property
Maritime property that is considered of historical or cultural significance may not be subject to salvage. In Part 2 of Schedule 3 to the CSA 2001, the Government of Canada specifically reserved the right not to apply the Salvage Convention "when the property involved is maritime cultural property of prehistoric, archaeological or historic interest and is situated on the seabed". To date, we are not aware of any case in which such a designation has been specifically made; however, we are aware that the Government of Canada is actively looking at this topic. Moreover, there are a number of provincial statutes that purport to apply to maritime shipwrecks and that prohibit disturbing the wreck in any way. (See for example the Heritage Conservation Act of British Columbia, which declares any ship (or airplane) wreck a heritage object after only two years.) Whether, and to what extent, these provincial acts are constitutionally applicable to shipwrecks is a matter of debate but, as most of them make it an offence to breach their provisions, it would be wise to seek professional advice before disturbing any property found on the sea bed.

The law of "wrecks" is related to but different from salvage. It concerns derelict (i.e. abandoned) vessels, wrecked vessels, stranded vessels, vessels in distress and any other property that is found floating (flotsam), washed ashore (jetsam) or on the bed of the sea (lagan). "Wreck" is defined in s. 153 of the CSA 2001 as including: (a) jetsam, flotsam, lagan and derelict and any other thing that was part of or was on a vessel wrecked, stranded or in distress; and (b) aircraft wrecked in waters and anything that was part of or was on an aircraft wrecked, stranded or in distress in waters.

The finder of a wreck is not entitled to retain the wreck or any part thereof or any lost cargo. To the contrary, the finder is required to report the find to the Receiver of Wrecks, an office created under the provisions of the CSA 2001, and must deal with the wreck as directed by the Receiver. The finder is, however, entitled to a salvage award, which is determined by the Receiver but which cannot exceed the value of the wreck. The law relating to wrecks is addressed in Part 7 of the CSA 2001, which provides:
- Any person who finds, takes possession of or brings a wreck into Canada, the owner of which is not known, shall report it to the Receiver of Wrecks (an office designated under the act) and take whatever measures the Receiver directs, including delivering it to the Receiver (s. 155);
- The person who finds the wreck is entitled to a salvage award determined by the Receiver, which may be the wreck, part of the wreck, or all or part of the proceeds of the disposition of the wreck (s. 156), but the award cannot exceed the value of the wreck (s. 159);
- Every person is prohibited from possessing, concealing or disposing of a wreck that has not been reported (s. 157);
- The Receiver must release the wreck or the proceeds of its disposition to the owner provided the owner: (1) submits a claim within 90 days of when the wreck was reported; (2) establishes their claim to ownership; and (3) pays the salvage award and the Receiver’s fees and expenses (s. 158);
- The Receiver’s decisions respecting the right of ownership or the salvage award can be appealed to a court, but the salvage award of the court shall not exceed the value of the wreck (s. 159);
- The Receiver may sell or destroy the wreck: (a) after 90 days if the value of the property is over $5,000; or (b) at any time if (i) the value is less than $5,000, (ii) the storage costs would likely exceed the value, or (iii) the wreck is perishable or poses a threat to health or safety (s. 160). If the wreck is sold and the owner does not make a claim within 90 days, the proceeds, less the salvage award and the Receiver’s expenses, are paid into the Consolidated Revenue Fund (i.e. they go to the Crown) (s. 160); and
- If the owner has established ownership but not paid the salvage award within 30 days, the Receiver may dispose of the wreck and pay the owner the balance after deducting the unpaid salvage award and fees and expenses (s. 161).

It should be noted that the above provisions apply only where the owner of the wreck is not known to the finder. If the owner is known, the finder is only entitled to claim a salvage award unless the finder can prove the owner has abandoned the property. This requires that the finder commence court proceedings for a declaration that the owner has abandoned the property and that the finder has title. (See, for example, the unreported decision in All Tow Boat Moving Ltd. v Lovdahl et al. (FCTD) T-2085-14 (2015-03-02), a copy of which can be found here.) One would think that this would be a rare occurrence since, if the wreck has value, the owner will not abandon it. Additionally, a declaration of title would make the finder liable for any pollution emanating from the wreck and possibly for wreck removal expenses.

A vessel, or part of one, that is wrecked, sunk, partially sunk, lying ashore or grounded is an obstruction within the meaning of the Navigation Protection Act (the "NPA") and is subject to the provisions of that act. In summary, such vessels must be removed by the owner or person in charge, and if they fail to do so, the Crown will do so at the owner’s expense. More specifically, the provisions provide:
- The person in charge of the obstruction (i.e. the owner) shall notify the Minister of the obstruction, maintain a sufficient signal and light to indicate its position and immediately remove the obstruction. If the owner fails to remove it, the Minister may do so (s. 15);
- If the Minister is required to remove the wreck, the costs incurred are a debt owing to the Crown, recoverable from the person in charge of the wreck or any person through whose act or fault the obstruction was occasioned or continued (s. 18);
- If any vessel is wrecked, sunk, partially sunk, lying ashore, grounded or abandoned in any navigable water, the Minister may authorize any person to take possession of and remove the vessel, part of the vessel or thing for that person’s own benefit, on that person giving to the registered owner or other owner of the vessel, or to the owner of the thing, if known, one month’s notice or, if the registered owner or other owner of the vessel or owner of the thing is not known, public notice for the same period in a publication specified by the Minister (s. 20).

Limitation of Liability
Under Canadian law there is no right to limit liability in respect of claims for wreck removal.
Pursuant to Article 18 of the Convention on Limitation of Liability for Maritime Claims, 1976, as amended by the Protocol of 1996, and Part 3 of Schedule 1 to the MLA, Canada has exempted from limitation of liability "Claims in respect of the raising, removal, destruction or rendering harmless of a ship that is sunk, wrecked, stranded or abandoned, including anything that is or has been on board that ship".

Nairobi International Convention on the Removal of Wrecks
The Nairobi International Convention on the Removal of Wrecks, 2007 is a new convention that came into force on 14 April 2015. The convention applies to "wrecks" in the territorial waters of signatory states but can be extended to include internal waters. Canada is not a party to the convention, but Transport Canada has recommended, in two separate discussion papers, that Canada accede to the convention and apply it to Canada’s internal waters and territorial sea. The 2010 Transport Canada discussion paper on this can be found here. In July 2015 Transport Canada issued a second discussion paper with specifics of what is proposed to be implemented in Canada. The 2015 discussion paper can be found here.

In summary, the convention requires that a "wreck" be reported and, if it is determined that the "wreck" is a hazard to navigation or the environment, that the wreck be removed. The convention requires owners to have compulsory insurance to cover the costs of wreck removal and provides for direct action against the insurers. The owner of the "wreck" is responsible for the costs of removal unless the owner proves the wreck:
- resulted from an act of war or similar hostilities; or
- resulted from a natural phenomenon of an exceptional, inevitable and irresistible character; or
- was wholly caused by acts or omissions of third parties done with intent to cause damage; or
- was wholly caused by the negligence or other wrongful act of any government or authority responsible for the maintenance of navigational aids.

The convention permits the owner to limit liability under any applicable national law or convention but, as indicated above, Canada and many other countries have exempted wreck removal claims from limitation of liability.
[I’ve created this page to provide a brief summary of the image analysis issues related to our studies of gut microbial communities, mainly for students interested in computational projects. – Raghuveer Parthasarathy, Jan. 12, 2015]

Image Analysis and Machine Learning, in the context of visualizing gut microbial communities

Each of our bodies is a home for trillions of microbes, mostly resident in our digestive tract, whose roles in health and physiology are only beginning to be understood. The spatial structure and temporal dynamics of the microbial communities associated with humans and all animals are still largely mysterious, spurring my lab to develop new microscopy-based approaches to explore microbial colonization in zebrafish, a model organism. (See here for a blurb on our recent work on physical models of bacterial growth, with a pointer to a paper.)

Here’s one three-dimensional image of bacteria (red) and immune cells (green) in the gut of a live, larval zebrafish. Note that the bacteria exist as free individuals as well as dense clusters. Here’s a video taken over several hours; each frame is a projection of a 3D image (a stack of 2D images), in which we’re just looking at bacteria engineered to express fluorescent proteins.

Each series like this contains hundreds of gigabytes of image data. Converting the images into useful data — the number of microbes and their spatial location — is a computational challenge! In addition to the size of the datasets, here are three key issues, partially illustrated by one “zoomed in” figure.

1. There is a large fluorescent background, in addition to the signal from the bacteria. We generally deal with this by various types of adaptive thresholding, which work well.

2. We need to separate (“segment”) the gut interior from the exterior, which I’ve just roughly done here in green. This largely involves manual labor, and would be great to automate!

3. We’re very interested in classifying objects as individual bacteria or as clusters, in addition to classifying bacterial vs. zebrafish (“host”) cells. We’ve adopted machine learning methods for this, using support vector machines. Are our methods optimal? There is likely room for improvement!

Each of these issues could form the nucleus of an interesting project for a computer science student, and moreover one that is useful and that could lead to a research position! I’ll also note that we have a considerable amount of data, and a lot of manually curated “ground truth” sets, allowing assessment of various computational approaches.
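To make these issues concrete, here is a minimal, self-contained sketch of the kind of pipeline described above. It runs on synthetic data rather than our real images, and the parameter values, the features, and the size cutoff used to fake the training labels are illustrative assumptions only; it assumes NumPy, scikit-image, and scikit-learn are installed.

```python
# A self-contained sketch (synthetic data, not a production pipeline) of the
# three steps described above: adaptive thresholding against a varying
# background, connected-component labelling, and a toy SVM separating
# "individual" from "cluster" objects. Parameters, features, and the size
# cutoff used to fake training labels are illustrative assumptions.
import numpy as np
from skimage.filters import gaussian, threshold_local
from skimage.measure import label, regionprops
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# --- synthesize a small 3D stack: smooth background + bright blobs ----------
stack = np.zeros((8, 256, 256), dtype=float)
yy, xx = np.mgrid[0:256, 0:256]
for z in range(stack.shape[0]):
    stack[z] = 0.3 * (xx / 255.0)                      # spatially varying "autofluorescence"
    for _ in range(20):                                # small spots: individual bacteria
        cy, cx = rng.integers(20, 236, size=2)
        stack[z] += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))
    cy, cx = rng.integers(40, 216, size=2)             # one larger blob: a dense cluster
    stack[z] += 1.5 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 12.0 ** 2))
stack += 0.05 * rng.standard_normal(stack.shape)

# --- issue 1: adaptive (local) thresholding, plane by plane ------------------
mask = np.zeros(stack.shape, dtype=bool)
for z, plane in enumerate(stack):
    smoothed = gaussian(plane, sigma=1)
    local_thr = threshold_local(smoothed, block_size=51, offset=-0.05)
    mask[z] = smoothed > local_thr

# --- issue 3, first half: connected components and per-object features -------
labels = label(mask)                                   # 3D labelling across the stack
features, sizes = [], []
for region in regionprops(labels, intensity_image=stack):
    features.append([region.area, region.mean_intensity])
    sizes.append(region.area)

# --- issue 3, second half: toy SVM, individual bacterium vs. cluster ---------
# Real training labels would come from the manually curated ground-truth sets;
# here they are faked from object size just so the example runs end to end.
X = np.asarray(features)
y = (np.asarray(sizes) > 200).astype(int)              # 0 = individual, 1 = cluster (assumed cutoff)
if len(np.unique(y)) == 2:
    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    print("objects found:", len(y), "| predicted cluster fraction:", clf.predict(X).mean())
```

Segmentation of the gut interior from the exterior (issue 2) is deliberately not shown; that is exactly the part that currently relies on manual work and would be the most interesting piece to automate.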
All Hail Holy Name: The Emigration of Souls
By Sheldon Firem

The “present” is that infinitesimally small half-second of reality we are briefly aware of as we emigrate to the next half-second of reality. On either side of the present are two infinite, temporal worlds, the past and the future. These are countries that we leave as emigrants and enter as immigrants, respectively. We are never truly a permanent resident of the present; the present is but a fleeting way station for our body, mind, and spirit as we irresistibly depart the past and enter the future. Our memory is the existential glue that interprets and unifies this emigration and immigration of the soul. This time travel requires guides. The guides are the people we encounter along the way. I found memorable guides at Holy Name. “All Hail Holy Name!”

I grew aware of the transformative nature of time travel when Holy Name High School was located near the intersection of Broadway and Harvard in Cleveland, Ohio. Holy Name Parish, the Gallagher Building, and the Carroll Building were the tangible spiritual home of Irish immigrants, founded in 1864 as a parish with an elementary school; a high school was added in 1914. The Holy Name High School community now continues its successful mission in Parma Heights (1978). Its colors are green and white, the “Green Wave” is its emblem, and “The School’s the Thing” is its motto.

I entered Holy Name in 1962. My mother paid somewhat less than $200 for tuition each year. Each class level had about 200 students. Surprising the teachers’ lunchroom pool, I graduated in 1966. I thought I was merely going to school, a continuance of an elementary education begun at Holy Family Parish on East 131st Street and St. Mary of Czestochowa Parish on East 141st Street in Cleveland. What actually happened was that I was transformed into an emigrant soul through the education I received from the lay educators and the Sisters of Charity of Cincinnati.

High school students can be reluctant time travelers, sometimes refusing to leave their comfortable childhood cocoons, sometimes fixated on the “kicks and bangs and thrills” of the adolescent’s present, sometimes projecting themselves into a future of undefined hopes. This is precisely the juncture where emigration guides are needed. This is precisely when the educator-guides at Holy Name invited me to time travel. Who were these educator-guides? While many names are recalled, some examples will illustrate this guided time travel.

Mr. Emil Maras, English teacher: Mr. Maras led students to literature, offering them classics like The Odyssey; he required a weekly essay; he passed out horehound candies while we read silently every Friday. Above all, he led by example, regaling us with an occasional, personal war story, giving pep talks about living fully (the “triple threat” of mind, body and spirit) and privately saying the rosary in church. Emil Maras guided us with logos, the word.

Sister Jeanne Pierre, French teacher: Sister Jeanne Pierre taught me French for four years. Most students took Latin. Sister not only taught French, but she taught that there was a broader cultural world about which we were ignorant. Her diminutive stature was more than matched by her creativity, as she used vinyl “records” to instruct the class. One of the French essays we translated even involved the making of Beaujolais wine. Sister Jeanne Pierre guided us with eyes to peer over the cultural horizon.
Father George Eppley, Principal: Father Eppley directed the administration of Holy Name, but his guidance shone brightest in the monthly Friday mass he conducted for students and staff. His homilies were rooted in scripture but driven by social justice themes, drawing on President John Kennedy and current events and challenging students to make a difference in the world. Father Eppley (who later became Mr. Eppley and wrote guest op-eds for the Plain Dealer) guided us with the challenge of social justice through action.

Mr. Robert Gale, Business Law teacher: Mr. Gale taught law classes to students who thought, as many students still believe, that things have to be “fair.” Well, Robert Gale taught that “fair” is a nice concept, but that a well-reasoned argument, preparedness, facts, and the law are essential to attain justice. Robert Gale guided us with the logic of the law.

Ms. Jean Sperling, History teacher: Ms. Sperling presented history to high school students whose personal history began in 1948 and was then reaching its zenith in the early to mid-1960s. We were humbled to learn that the people of the past lived and died and mattered. Ms. Sperling also unobtrusively infused the precepts of the Catholic Church into history class in a liberal/enlightenment manner, with which Jefferson, Augustine and Luther would have agreed. Jean Sperling guided histrionic adolescents into becoming historians.

These time travel guides of Holy Name High School positively transformed my emigration from the past and immigration into the future. We rarely if ever can emigrate alone. The journey is not assured. We may resist guidance, the guides we encounter may not be true guides, or fate may step in to thwart our transformational journey. Thoreau, writing of a shipwreck of Irish immigrants within sight of the shore they sought, tells us that most of the immigrants were tragically lost and that the shore was strewn with wreckage and bodies. He states, “…they were within a mile of its shores; but, before they could reach it, they emigrated to a newer world than Columbus dreamed of ….”

We are never truly a permanent resident of the present; the present is but a fleeting way station for our mind, body and spirit as we irresistibly depart the past and enter the future. As Danu, the ancient Irish goddess of wisdom, pointed the way for her charges, the educators of the Green Wave illuminated signposts for their students, for their emigration of souls.
Are you thinking about adding a garage to your home? If so, there are lots of questions milling around in your head right now. The main one is: do I really need to do this? Will I actually benefit from adding a garage to my home? The answer to those two questions obviously varies from person to person, but the general consensus is that everyone can benefit from adding a garage to their property. The pros certainly outweigh any cons, and we’ll talk about both of these in a section within this guide.

What Is A Garage?
A garage is technically a building that is used to keep cars and other motor vehicles. However, modern-day definitions of a garage can be a bit broader than that. People often see them as an extended piece of your home that is built for a separate purpose. It could still be home to your cars or motorbikes, but garages can also be used as the following:
- Home offices
- Storage areas
The list goes on, though it is quite important to add that most people will convert an existing garage into one of these things. If you wanted any of the above, you would just pay for an extension and create the rooms, rather than creating a garage. So, for the purpose of today’s post, we are talking about adding a traditional garage to your home. It will be used as a haven for your cars and other vehicles, but it can still have other uses — such as storage for garden tools, and so on.

What Are The Advantages Of Adding A Garage To Your Home?
Why should you consider adding a garage to your home? What sort of benefits are on the horizon after an addition like this? Believe it or not, adding this simple space to your property can be highly advantageous. In fact, most people will see the following benefits:
- Save money on car insurance — yes, a strange place to start, but this is one of the underlying benefits of building a garage. Keeping your cars locked away in a safe place will mean the risk of them getting stolen or damaged is lower than usual. Therefore, insurance companies charge you less on your premiums because of the reduced risk. So, it becomes an investment that keeps saving you money year after year!
- Increase your property value — garages are highly sought after, partly due to the point above! People know they can protect their cars with a garage, so it is something they look for. Plus, a garage provides extra space and could be converted, which is another thing people enjoy. Hence your property value will increase!
- Gain extra space — as just mentioned, garages provide you with some extra space. While a lot of it could be taken up by your car, you can use the rest for other means. Perhaps you can bring storage boxes down from the loft and place them here, meaning you have an empty loft to convert! Or, you could keep your washer and dryer in the garage, freeing up space in another area of your home.
- Improve your curb appeal — adding a garage to your home can make your property look a lot nicer. Generally, it will make your house seem bigger, which makes it more attractive and impressive. There is just something about seeing a nice garage next to a home that makes it more appealing.

What Are The Disadvantages Of Adding A Garage To Your Home?
As alluded to in the introduction, there aren’t many disadvantages to this idea. In fact, there are only three, and in most cases, the pros definitely outweigh the cons!
- Can be expensive — like all home improvements, you need to take the costs into account. How much will this cost?
You can reduce the costs by making the garage smaller or picking different doors, but any extension like this will cost money, and it might be too much for you right now. - Potentially takes up exterior space — where will your garage fit? If you don’t have empty space to the side of your house, you may need to dig up part of your yard to fit it there. This means reducing your exterior space, which some might not like. - An extra place to maintain — lastly, a garage requires maintenance, meaning you have another place to maintain alongside your house. What Are The Top Considerations When Adding A Garage To Your Home? If you are going ahead with the garage build, what do you need to be concerned about? Here are some of the most critical things to be aware of: - The garage size — clearly, you need to consider how big your garage will be, and it needs to at least accommodate your car, or it will be pointless. - The garage doors — did you know there are so many different types of garage doors available? It is worth reading a guide on buying garage doors to understand the options available and their benefits/drawbacks. Ideally, you want doors that open easily, will not break quickly, provide ample security, but also take up as little space as possible. - The garage flooring — you need to pick a flooring option that can withstand heavy loads and deal with a car driving in and out of it. Concrete suffices, but other ideas, like epoxy flooring, should be considered. Once you have these three elements sorted out, you pretty much have a garage design ready to go. From here, it is a case of choosing how the interior looks and adding any extras — such as storage shelving or a worktop. All in all, adding a garage to your home is largely beneficial. Provided you can afford it, have the space, and are able to maintain it, this is a home improvement project that’s well worth considering.
A blog post, and a recent visit to my alma mater, UVM, has me thinking again about public art, and contexts. My question is this: can a spectacular painting be ruined by the wrong frame? Probably. I wonder how many paintings I’ve seen appraised on Antiques Roadshow, where the appraiser raves about the painting, then looks down their nose and says that, of course, the frame needs to go. So why would this be any different for sculpture, or for outdoor art?

The blog post is Lovely Filth, by Douglas Perkins, on the Middlebury Art Museum blog. He writes of Solid State Change, a challenging piece by Deborah Fischer located outside of the Hillcrest Environmental Center. Others have written (and commented, don’t skip those) on the sculpture on his blog, so I won’t rehash. I won’t even give my opinions on the piece itself, as I’ve amply proved in the past I’m no art critic. I do, however, wonder if part of the perceived problems with the art comes from poor context.

The winter meeting of Green Works, the Vermont Nursery and Landscape Association, brought me to the University of Vermont for what may have been the first time in at least 20 years. The campus looked great; I had a little busman’s holiday walking the grounds, mentally comparing campuses. I was walking toward my first dorm, Buckham, one of the ‘shoeboxes’, when I saw part of ‘Lamentations’ through the cold mist of the day. “Lamentations Group 1989” is by Judith Brown, 1931-1992, and was donated to UVM in 1993. Only two of the original five ladies are still outside; the other three are awaiting restoration (and funding). (Side note: thanks to CAPP, Middlebury will never have to face this; we owe our trustees a big debt for setting up the art fund.)

What I like best about the piece, though, is the context. Situated behind the Fleming Museum, the statues appear to be walking through the grove of honey locust trees planted by Dan Kiley, the famous Vermont landscape architect, a grove that matches one planted at the Cathedral of the Immaculate Conception down the hill by the waterfront. The honey locusts are grown in close quarters, and their dappled shade and contorted branches give a perfect setting for the spectral wraiths as they seemingly float through the bleak grove. The context of the landscape matches and enhances the art, like a good installation in a gallery, but more dynamic, changing with the light, seasons, years.

My favorite CAPP piece at Middlebury is Hieroglyphics for the Ear, 1997, by Kate Owen. This piece is located in the woods along the path on the way to Nichols House, home to the faculty heads of Atwater commons. The base of the piece lies among shade plants, such as vinca and lungwort, which help transition the work from the woods to the gravel path. The metal and stone blend with the site, but stand out enough to be noticed, and the engraved text almost echoes in the woods. I doubt the piece would resonate as well outside of this ‘frame’, say in the center of an expansive lawn.

Another piece with a good context is Frisbee Dog, by Patrick Villiers Farrow, 1989. This work is on the edge of the main quad, underneath a large elm tree behind Munroe Hall. Here the context plays to the sculpture, matching the students so often out playing frisbee in the quad. If it were placed in a more subdued setting, in amongst other works, the dynamism would be lost, the dog looking misplaced.

A piece that originally suffered from context issues is the Garden of the Seasons, Michael Singer, 2003-2004.
Observers watching this area of campus have probably seen the surrounding landscape transformed several times over the years. Sometimes it is difficult picking a frame for a painting. The sculpture/garden lies a third of the way down a rain garden ditch that treats storm water from most of the library quad, and is situated under another elm tree, one of our better specimens. The swale was planted in wildflowers and grasses, specifically to treat the drainage and prevent storm water from entering the greater Middlebury area. This swale, however, was surrounded by mown lawn, and backed by the large southern facade of the main library. In short, something was ‘off’, and an acre of wildflowers was planted around the swale to help contextualize the sculpture and the swale together.

Wildflower plantings, though, have a limited lifespan, and can quickly look ‘weedy’, a problem compounded by the location in the center of the quad. The facade of the library was working against the planting as well: the weeds and wildflowers sat all in one horizontal plane, matching the lines of the windows above on the library. The ditch is ecologically very important; it needs to remain planted for storm water treatment, and it needs to stay strongly diverse as important wildlife and bird habitat in a section of campus lacking such accessible space. We planted the ditch to the east of the sculpture in large swaths of native shrubs, such as Winterberry, Dogwood, Witch Hazel, and Redbud. A couple of these varieties are matched in the Garden of the Seasons, tying the piece into the greater landscape. The mistake I made was in planting smallish shrubs. When mature, the swale will be transformed into a mixed-height shrub border, breaking the strongly horizontal lines of the library. Now, however, all the shrubs are barely poking through the wildflowers and weeds of the ditch, so some imagination is still required to achieve the effect. Patience, grasshopper. When mature, though, the Garden of the Seasons will find its proper context, and be perfectly ‘framed’ in the landscape.

I love Hillcrest as a building. Originally student housing modeled after a Victorian farmhouse, the building has been transformed several times, and has now been fully restored and serves as the Environmental Center for the college. The inside retains its farmhouse charm, but unlike most I’ve been in, including parts of ours, is light, airy, and spacious. The modern touches inside and out, such as solar panels, play and blend with the old remaining Victorian touches.

Solid State Change (called tirerrhea by the students) lies alongside the building on the south side, against a stainless steel wall that I believe acts as a heat sink for the building. To further illuminate the context of the sculpture: to the south is the giant deck of Proctor Hall, all gray stone; to the west are a parking lot, Hillcrest Road, and more parking lot; and to the east are three in-ground propane tanks (lids above ground), a sidewalk, and Hepburn Road. It’s a challenging site, almost industrial in feel, and I think it works against the piece. The intent of the tires was to mimic the natural geology of the Champlain Valley, and while we can debate the look of the tires versus actual dolomite, the setting is not helping. Does the stainless steel wall accentuate the recycled tires, rather than making one think of bedrock? Do the black tires help draw your eye towards the myriad of roads and parking lots? Is the space large enough for the sculpture?
I pass by a barn on the way to Middlebury daily. The barn was recently restored, and its foundation sits upon a Panton stone ledge, organically growing from the site. I’m pretty particular about using stone in the landscape, whether in walls, walks, or ledges. Maybe I’m oversensitive, but I think stone should be local, echoing the greater area the site sits in. The ‘stone’ of Solid State Change is out of context, sitting in an improbable location, not part of the building, not part of the landscape. The work sits in mown lawn, a suburban look, not like a ledge sitting in a hay field, surrounded by tall grasses because the farmer is unwilling to risk sharp cutter knives near piles of rock.
Our new Year 7 students received their first taste of TREE in the Senior School early in Term 1 with their science project on chickens. The students hatched out day-old chicks at the beginning of the term and then separated them into three groups of four chickens each. Each group was fed a different proportion of protein in its crumble mix: 16%, 18% and 22% respectively. Each day a science class went down to the agriculture plot to weigh the chickens and record the results. This continued for three weeks, by which time the students had trouble keeping the ‘flighty’ chickens in the container for weighing! The project was a fantastic opportunity to discuss variables, explore the scientific method and see experimental science first-hand. The students collated some brilliant results indicating that protein concentration has a dramatic effect on the growth rate of chickens.
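For anyone curious how weight records like these might be analysed, here is a minimal sketch that fits an average daily gain to each group. The numbers are invented purely for illustration (they are not the students’ actual data), and it assumes NumPy is available.

```python
# A minimal, illustrative analysis of daily chicken-weight records.
# The weights below are synthetic placeholders, not the class's real data.
import numpy as np

days = np.arange(0, 21)  # a three-week trial, weighed daily

# Hypothetical mean weights (grams) for each protein group.
records = {
    "16% protein": 40 + 9.0 * days,
    "18% protein": 40 + 10.5 * days,
    "22% protein": 40 + 12.0 * days,
}

for group, weights in records.items():
    slope, intercept = np.polyfit(days, weights, 1)  # straight-line fit: grams gained per day
    print(f"{group}: about {slope:.1f} g gained per day")
```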