There's a surprising twist to Regina Willoughby's last season with Columbia City Ballet: It's also her 18-year-old daughter Melina's first season with the company. Regina, 40, will retire from the stage in March, just as her daughter starts her own career as a trainee. But for this one season, they're sharing the stage. Performing Side-By-Side In The Nutcracker Regina and Melina are not only dancing in the same Nutcracker this month, they're onstage at the same time: Regina is doing Snow Queen, while Melina is in the snow corps, and they're both in the Arabian divertissement. "It's very surreal to be dancing it together," says Regina. "I don't know that I ever thought Melina would take ballet this far." Left: Regina and Melina with another company member post-snow scene in 2003. Right: The pair post-snow scene in 2017, in the same theater.
The New York City Ballet Board of Directors announced on Saturday the interim team that has been appointed to run the artistic side of the company during ballet master in chief Peter Martins' leave of absence. Martins requested a temporary leave from both NYCB and the School of American Ballet last Thursday while the company undergoes an internal investigation into the sexual harassment accusations aimed at him. The four-person group is made up of members of the company's current artistic staff, led by ballet master and former principal dancer Jonathan Stafford. Joining Stafford are NYCB resident choreographer and soloist Justin Peck and ballet masters Craig Hall and Rebecca Krohn, both former dancers with the company. While the members of this group haven't had much leadership experience, their close familiarity with the company (Krohn left the stage for her new role just two months ago) should help to ease the dancers' transition. The team will be responsible for the day-to-day artistic needs of the company including scheduling, casting and conducting rehearsals. While there's no word yet on the length of their tenure, we'll continue to keep you updated as the story surrounding Martins unfolds.
The Philadelphia Eagles and the New England Patriots aren't the only teams bringing Super Bowl entertainment this week. To celebrate game day (and cheer on their region's respective teams), the dancers of Pennsylvania Ballet and Boston Ballet took a break from their usual rehearsals to perform some Super Bowl-themed choreography. Dressed in their Eagles green, the PAB dancers performed a fast-paced routine full of fouetté turns, sky-high jumps and some swan arms (because they're known as the birds, get it?). But Boston Ballet also decided to get in on the fun—with five Super Bowl wins, they're used to seeing their team in the big game. Sharing their own video on Facebook, which stars principal Paul Craig and soloist Derek Dunn, Boston Ballet threw in a few Balanchine tricks thanks to some props from Prodigal Son. This is officially our new favorite way to get in on the football fun.
Looking for your next audition shoe? Shot at and created in collaboration with Broadway Dance Center, Só Dança has launched a new collection of shoes developed with some pretty famous faces of the musical theater world! Offered in two different styles with either 2.5" or 3" heels, the shoes have top industry professionals loving how versatile and supportive they are! Pro tip: The heel is centered under the body so you can feel confident and stable!
New York City Ballet principal dancer Rebecca Krohn will take her final bow with the company this Saturday night. Krohn joined NYCB as an apprentice in the fall of 1998 and slowly rose through the ranks, becoming a principal in 2012. Though Krohn is best known for her flawless execution of classic Balanchine leotard ballets, her repertoire is vast, spanning Jerome Robbins to Justin Peck. After dancing Stravinsky Violin Concerto with Amar Ramasar on Saturday, Krohn will return to the NYCB studios on Monday in a new role: ballet master. We had the chance to talk to the thoughtful and eloquent dancer about her time with the company and goals for the future. Was New York City Ballet always your dream company? As soon as I knew I wanted to be a professional dancer, I knew that I wanted to be in New York City Ballet. I moved to New York when I was 14 to train at the School of American Ballet, and I got my apprenticeship with the company when I was 17, so it was really a dream come true. Krohn and Adrian Danchig-Waring in Balanchine's Stravinsky Violin Concerto. Video Courtesy NYCB. What have been your favorite ballets or roles to dance? Balanchine's Stravinsky Violin Concerto, which I'll dance for my final show, has always been a favorite, as well as Balanchine's Movements for Piano and Orchestra and Agon. Also Robbins' Dances at a Gathering... there are so many, it's hard to choose! I've always really loved the Balanchine black and white ballets, and there are some Robbins ballets that are always so fulfilling. Can you think of a favorite moment with the company? After almost 20 years there are countless things. In general I would say the time that I've had onstage with some of my friends and dancing partners has been so special. It's one thing to be a friend with someone and another to also share the stage with them. There's just an amazing sense of trust and spontaneity; I feel so connected when I'm out there. That's something I'll never forget. What's the main way that your experience in the company has changed over the years? As I was getting older the company all of a sudden started to seem younger and younger. When I became a soloist and especially a principal my relationship with the corps de ballet dancers shifted. I wanted to be someone that the young dancers could look up to; I wanted to reach out and connect to them more, and to offer support and advice. Krohn and Amar Ramasar in Balanchine's "Movements for Piano and Orchestra." Photo by Paul Kolnik, Courtesy NYCB. Did you always know that you wanted to stay on with the company? It had been in the back of my mind for a number of years, but I didn't really address it formally until a year ago. I spoke to Peter (Martins), just to kind of let him know what I had been thinking. I wanted to hear how he felt about it, which was actually a little nerve-wracking, but he thought it was a great idea. What are you most looking forward to in your new role? I'd like to nurture them and their talents; I'm always amazed to see how talented everyone is. The ballets that we have in our repertoire are so amazing—it's a great honor to be able to carry them on with the new dancers for the future. Krohn with Robert Fairchild in Justin Peck's Everywhere We Go. Video Courtesy NYCB. Is there someone whose teaching style or mentorship style you'd most like to emulate? There are a couple of ballet masters that I've connected to. I'm very close to Karin von Aroldingen. Her undying passion for these pieces is incredibly inspiring.
Susan Hendl has also been an inspiration. She has a wonderful talent of drawing out everyone's unique qualities and femininity. What parts of your life outside of ballet do you most look forward to cultivating now that you'll have more time on your hands? I'm looking forward to having more time to enjoy museums in the city. While I was dancing I didn't want to be up on my legs all day on my days off. I won't have to worry about that so much now, and I can spend my day off roaming around and being inspired. I also love to cook, so I'll get to cook a lot more and hopefully host more dinner parties. Krohn and Company in Balanchine's "Serenade." Photo by Paul Kolnik, Courtesy NYCB. Do you have a piece of advice for young dancers who are just starting out? What's so special about ballet is the discipline that it instills. It's important for young dancers to really understand that that is what's taught to you in ballet class every day. It's an invaluable quality for a person to have, whether they continue to dance or end up doing other things. My other piece of advice is that you have to treat each day as a new start. Some days you might not feel good about yourself, or things in your body might not be working well—every day is different. But you have to start fresh, be positive and move forward.
In 2014 the dance world was surprised when longtime Pennsylvania Ballet artistic director Roy Kaiser stepped down. It was announced yesterday that Kaiser will take the helm again as the Las Vegas-based Nevada Ballet Theatre's new artistic director, replacing James Canfield. Kaiser will be the fourth artistic director in NBT's 46-year history. The company will be gaining a highly experienced leader. Following his rise through the ranks to principal dancer at Pennsylvania Ballet, Kaiser worked as a ballet master and eventually took the reins as the company's artistic director in 1995. Pennsylvania Ballet added 90 new ballets and 35 world premieres to its repertoire under his leadership. Roy Kaiser with Pennsylvania Ballet Dancers. Photo by Alexander Iziliaev, Courtesy Nevada Ballet Theatre. NBT is the largest professional ballet company and dance academy in the state, with 35 company dancers and a vibrant school. Kaiser will hit the ground running with the company's 10th Anniversary Celebration of A Choreographer's Showcase, its collaboration with Cirque du Soleil, opening this weekend. In November the company will reference Kaiser's Balanchine roots with a program titled Classic Americana featuring Serenade and Western Symphony, as well as Paul Taylor's Company B.
In one of 60 spacious dance studios at the Beijing Dance Academy, Pei Yu Meng practices a tricky step from Jorma Elo's Over Glow. She's standing among other students, but they all work alone, with the help of teachers calling out corrections from the front of the room. On top of her strong classical foundation and clean balletic lines, Pei Yu's slithery coordination and laser-sharp focus give her dancing a polished gleam. Once she's mastered the pirouette she's been struggling with, she repeats the step over and over until the clock reaches 12 pm for lunch. Here, every moment is a chance to approach perfection. Pei Yu came to the school at age 10 from Hebei, a province near Beijing. Now 20, and in her third year of BDA's professional program, she is an example of a new kind of Chinese ballet student. Founded in 1954 by the country's communist government, BDA is a fully state-funded professional training school with close to 3,000 students and 275 full-time teachers across four departments (ballet, classical Chinese dance, social dance and musical theater). It offers degrees in performance, choreography and more. BDA's ballet program has long been known for fostering pristine Russian-style talent. But since 2011, the school has made major efforts to broaden ballet students' knowledge of Chinese dance traditions and the works of Western contemporary ballet choreographers. Pointe went inside this prestigious academy to see how BDA trains its dancers. Getting In BDA's admission process is extremely competitive, despite the school's large numbers. The ballet program is made up of a lower division, lasting seven years, and a four-year professional bachelor program. The professional division's admission procedure is extensive. Every year, hundreds of students ages 16 to 18 audition in Beijing over the course of two days, presenting classical and contemporary variations and improvisational work, and taking an academic exam. "We are looking to produce artists with the technical skills to excel in professional companies and the knowledge to work in all jobs in the field of dance," says the ballet department's executive and artistic director and former National Ballet of China principal Zhirui (Regina) Zou. Nearly 100 are currently enrolled in the professional ballet program. Though the school does admit foreign applicants, it does not host international students very often because the academic entrance exam measures Chinese language proficiency (most classes are taught in Chinese). BDA does participate in exchange programs with ballet schools around the world. A Typical Day Students begin their days with an early 8 am technique class. Following the Vaganova method, classes are strict and focus on precise positions and placement. Upper levels are split to keep class size small—around eight students per class. Teachers correct individual students—usually only the best ones, positioned front and center—using the terms "not good" (bù hǎo) and "better" (gèng hǎo), but rarely awarding praise. The day continues with classical Chinese dance, character, contemporary, repertoire and pas de deux, as well as dance history, anatomy, music appreciation and injury prevention. "Classical Chinese dance is a large part of our identity as Chinese ballet dancers," explains Zou. She points out an example from a girls' ballet class, where students circle their heads as if in a reverse renversé during an attitude promenade.
"Chinese dance focuses on circular upper-body movements, a unique coordination that complements ballet technique." Rehearsals and classes can end as late as 9 pm. Students live on campus in dormitories; with little free time and all focus placed on their futures, they consider BDA home until graduation. BDA's ballet department in a performance of "La Bayadère." Photo Courtesy BDA. Stage Time Performance is the most important aspect of BDA students' professional development, with annual productions featuring classical ballets, contemporary works and student choreography. Since dancers don't usually audition internationally, these performances are their chance to be discovered—directors from surrounding Chinese companies, including the National Ballet of China, attend in order to scout new talent. As a result, preparation is intense. In a studio rehearsal for La Bayadère, Act II, no understudies are present, and any imperfection is pointed out by one of four coaches at the front of the room. All lines, heads, arms and feet are perfectly placed. Although Pei Yu sparkles in her variation, the other dancers are similarly strong and dedicated. Students run the piece twice for stamina. Between run-throughs, each fastidiously practices difficult sections, never satisfied with the results. Dancers approach more contemporary movement with a mature coordination mirroring that of many professional dancers. Recent performances have included works by Paul Taylor, Jorma Elo and Christopher Wheeldon; students often get to work with the choreographers directly. Pei Yu learned Over Glow from Elo himself. "He showed us how to handle rhythm with the whole body," she says. "Ballet has so many rules, but contemporary ballet makes me feel excited and free." Sun Jie, a coach and men's teacher at BDA since 2008, explains how introducing works from Western choreographers has broadened the overall abilities of Chinese ballet dancers. "When we started to teach new works at BDA in 2011, students struggled to move freely or adapt to new movement," he says. "But learning these styles over time has opened dancers' eyes to new possibilities." Life After Graduation BDA students enter professional life somewhat older than in the West, with graduates ranging from 20 to 22 years old. Only the most promising students receive company contracts, while others accept teaching and other dance-related posts at BDA and surrounding dance schools and institutions. Although many have won awards at international competitions, the school does not actively focus on competing. "To prepare competitors, so much attention must be placed on individual students, whereas performances encourage the entire student body," says Zou. Even so, competitions have given these students international exposure, though only a small percentage of graduates accept jobs abroad. BDA alumni in American companies include San Francisco Ballet soloists Wei Wang and Wanting Zhao and ABT corps members Zhiyao Zhang and Xuelan Lu. With graduation in sight, Pei Yu shares the same dream as many of her classmates: a spot with the National Ballet of China. A men's classical Chinese dance class. Photo by Lucy Van Cleef. Beijing's Bournonville Connection Exposure to the Danish Bournonville style is a special component of the diverse ballet education that BDA offers. Former Royal Danish Ballet artistic director Frank Andersen has been a guest teacher at the school since 2002, and was awarded a professorship in 2012.
So far, BDA students have performed in Bournonville ballets including Napoli's Act III, La Ventana and Conservatory, and some danced in the National Ballet of China's 2015 production of La Sylphide. Thanks to almost 23 years of Andersen's work in Beijing, Bournonville has found a second home in China. Though there are Bournonville technique classes when time allows, Andersen imparts those lessons through the repertoire and Danish mime. "The most important part is making the mime believable," Andersen explains. "Young dancers often have the urge to overact. If I can't describe what I want with words, I have to show them." He holds his hands towards his chest, indicating the sign for "I." "Showing can be more effective than telling. That's the beauty of Bournonville's work. It's so honest."
Though American Ballet Theatre principals James Whiteside and Isabella Boylston have long displayed their envy-worthy friendship on Instagram, this week the Cindies (their nickname for each other) offered viewers an even deeper glimpse into their world. While on tour with ABT at the Kennedy Center, the duo sat down in front of the camera to answer some questions from their fans via Facebook Live. Starbucks in hand, they discuss their mutual love of food (particularly pasta and Japanese curry), the story behind the Cindy nickname and what it's like picking up contemporary choreography versus classical. Boylston also delves into her experience guesting with the Paris Opéra Ballet, her dream of choreographing an avant-garde ballet on Whiteside to a Carly Rae Jepsen song and best and worst Kennedy Center memories (like the time she fell onstage while doing fouettés at the end of La Bayadère's first act). Whiteside, on the other hand, imitates a unicorn, talks about preparing for roles and creates a new middle name for Boylston. The twosome also offer heartfelt advice for aspiring professional dancers. Check out the highlights in this video below; for the full 24-minute version, click here.
What's going on in ballet this week? We've pulled together some highlights. The Bolshoi Premiere of John Neumeier's Anna Karenina Last July Hamburg Ballet presented the world premiere of John Neumeier's Anna Karenina, a modern adaptation of Leo Tolstoy's famous novel. Hamburg Ballet coproduced the full-length ballet with the National Ballet of Canada and the Bolshoi, the latter of which will premiere the work March 23 (NBoC will have its premiere in November). The production will feature Bolshoi star Svetlana Zakharova in the title role. This is especially fitting as Neumeier's initial inspiration for the ballet came from Zakharova while they were working together on his Lady of the Camellias. The following video delves into what makes this production stand out. World Premieres at Richmond Ballet and Ballet Arizona Richmond Ballet's New Works Festival March 20-25 features pieces by four choreographers who have never worked with Richmond Ballet before: Francesca Harper, Tom Mattingly, Mariana Oliveira and Bradley Shelver. But there's a twist: each choreographer had only 25 hours with the dancers to create a 10-15 minute ballet. Meanwhile, Phoenix-based Ballet Arizona's spring season opens with Today's Masters 2018, March 22-25. The program includes a company premiere by Alejandro Cerrudo and world premieres by Nayon Iovino and artistic director Ib Andersen. Andersen's Pelvis features dance moves and costumes from the 1950s and references to Elvis Presley (pElvis, anyone?). San Francisco Ballet Honors Robbins Mysterious; romantic; witty; electrifying. That's how SFB describes their upcoming tribute to Jerome Robbins, March 20-25. The company is one of dozens of companies honoring Robbins this year; last week we covered Cincinnati Ballet and New York Theatre Ballet. SFB is presenting four works celebrating the famed choreographer's career in ballet and Broadway: Fancy Free, Opus 19/The Dreamer, The Cage and Other Dances. Reid and Harriet Designs at the Guggenheim March 25-27, costume design duo Reid Bartelme and Harriet Jung take the stage as part of Guggenheim Works & Process. The partnership is known for creatively intersecting design and dance; last summer they created a swimwear line based on Justin Peck costumes, and in November they presented their design-driven Nutcracker. For this week's show they collaborated with a long list of choreographers including Lar Lubovitch and Pam Tanowitz to create short works featuring their costumes. A number of dancers including New York City Ballet principal Russell Janzen will be acting as moving models.
This week marks the world premiere of Frame by Frame, The National Ballet of Canada's new full-length ballet based on the life and work of innovative filmmaker Norman McLaren. While those outside of the cinephile community might not be familiar with McLaren's work, he is commonly credited with advancing film techniques including animation and pixilation in the 20th century—he died in 1987. The Canadian artist's many accolades include a 1952 Oscar for Best Documentary for his abstract short film Neighbours (watch the whole thing here). Later in life, McLaren became interested in ballet, and made a number of dance films including his renowned 1968 Pas de deux. The new work will run June 1-10 in Toronto. The ballet combines vignettes of McLaren's life with movement quotes from his films and real-time re-creations of his technological advances. It was created collaboratively by NBoC principal dancer and choreographic associate Guillaume Côté and film and stage director Robert Lepage, who is making his NBoC debut. Pointe touched base with Côté on how this interdisciplinary project came together. Where did the initial spark for this idea come from? Robert and I have been wanting to work together for years. I approached him about doing a different project a long time ago, and he said "well, maybe that's not the correct project." He came to me about four years after that and said "I think I finally have something I'd like to work on with you," so then I approached the National Ballet. Were you familiar with McLaren's work before this project? I wasn't. Robert had just worked on a big McLaren documentary, and he got me to come see it and I realized that it was all about movement, and that this animator was basically a choreographer himself, a choreographer of space and time. There was all this material to work with and he'd made a number of iconic dance films, so it seemed like a no-brainer. So I started my research and kept finding out more surprising fun facts about McLaren's passion for dance, like that he met Guy Glover, the man that he was in a relationship with for 50 years or so, in the audience at Covent Garden while watching a ballet. Guy was a curator of a dance festival in Canada. Robert Lepage and Guillaume Côté in rehearsal for "Frame By Frame." Photo by Elias Djemil-Matassov, Courtesy NBoC. What was the research process like? Robert had just finished his documentary and has a really deep understanding of this kind of Canadian culture. He'd already gotten incredible footage from the National Film Board of Canada of McLaren behind-the-scenes. I watched a tremendous amount of film, and I read a lot about him and his collaborators, and even met a few who are still alive. The research was truly enriching, because I realized how wonderful of a person he was as well, which dictates how we share his personal life onstage. What was the timeline of the project? We had our first workshop four years ago. Since then we've been doing five-day workshops once a year in Robert's studio, Ex Machina, where he has a multimedia team. They would put together projections and props for us to experiment with. Robert would give me some homework, and I'd take it on myself to create some impressionist sections. Sometimes we decided that they were great, and sometimes we decided that they weren't.
It was this really collaborative way of getting things started because I was able to present dance first, and then we were able to add technology to it, as opposed to the technology taking over and stealing from the dance. Artists of the Ballet in rehearsal for "Frame by Frame." Photo by David Leclerc, Courtesy NBoC. How does the piece balance a narrative retelling of McLaren's life with reproductions of his works? I would say that the vignettes of his life are one third; the second third is the technologies that he pioneered, with abstract dance interpretations based on those technologies, like stop motion or body painting and body projection. And then the last third is basically just direct quotes from his films, bringing them back to life. Like with Pas de deux, the effects in that film took him months to make, but now thanks to technology we can duplicate it live. It's not a story ballet per se, but there is a story from beginning to end. What was the process like of taking movement quotes from McLaren's films?
The Broadway revival of Richard Rodgers and Oscar Hammerstein's Carousel opened last week, and while it stars luminaries from the worlds of musical theater (Joshua Henry, Jessie Mueller) and opera (soprano Renée Fleming), it also features choreography by one of ballet's own heavy hitters: New York City Ballet soloist and resident choreographer Justin Peck, who shares top billing with the musical's director, Jack O'Brien. There are more than a few familiar faces onstage, too. NYCB principal Amar Ramasar is cast as ne'er-do-well sailor Jigger Craigin, while NYCB soloist Brittany Pollack plays Louise, who dances Act II's famous "dream ballet." American Ballet Theatre soloist Craig Salstein took a leave of absence from the company to serve as the show's dance captain and to perform in the ensemble, where he's joined by recent Miami City Ballet transplants Adriana Pierce and Andrei Chagas (a Pointe 2015 Star of the Corps). Several other veteran Broadway ballet dancers round out the cast, including An American in Paris alumni Leigh-Ann Esty (Miami City Ballet), David Prottas (NYCB) and Laura Feig (Atlanta Ballet, BalletX), and Come Fly Away's Amy Ruggiero (American Repertory Ballet, Ballet Austin, Twyla Tharp). "CBS Sunday Morning" recently ran a lengthy profile on Peck, who at age 30 has already established himself as one of the world's most in-demand choreographers. In addition to shedding light on his efforts to make ballet more accessible to modern audiences ("I don't want ballet to feel like an elitist art form"), Peck answers the question on everyone's mind in the post-Peter Martins era: whether he's interested in becoming NYCB's next director. The profile also includes fun behind-the-scenes Carousel footage—check it out above.
This spring, The Joffrey Ballet will present the North American premiere of Alexander Ekman's Midsummer Night's Dream. The Swedish choreographer is best known for his absurdist and cutting-edge productions. "This is not Shakespeare's Midsummer," says Joffrey Ballet artistic director Ashley Wheater. The title of Ekman's version, which premiered with the Royal Swedish Ballet in 2015, refers not to Shakespeare but to Midsummer, the traditional Scandinavian summer solstice festival. The piece follows a young man through a day of revelry followed by a nightmare, blurring the line between dream and reality. "It's a kind of otherworldly dream," says Wheater. Bringing Ekman's production to life is no small feat; the piece utilizes the entire Joffrey company. "I can't think of another performance that has so many props," says Wheater, listing giant bales of hay, long banquet tables, umbrellas, beach chairs and more. The piece features a commissioned score by Swedish composer Mikael Karlsson, which will be performed onstage by singer Anna von Hausswolff. "She is very much a part of the performance; she's kind of the narrator," says Wheater. Dancers also contribute to the narration with spoken text, including imagery of young love and a dose of humor. The Royal Swedish Ballet in Alexander Ekman's "Midsummer Night's Dream." Photo by Hans Nilsson, Courtesy Joffrey Ballet. This will be the fourth work by Ekman that The Joffrey has performed. "I think it says something that Alex trusts us to bring the work to its full realization," says Wheater. "It's not just a few ballet steps here and there; he asks you to fully engage with yourself, not only as a dancer but as an actor and a person." Ekman's Midsummer will run April 25–May 6 at the Auditorium Theatre in Chicago.
The Bay County Sheriff's Office has arrested four men from Atlanta after they allegedly hired local transients to cash fraudulent checks for them using stolen personal information. Three transients have also been arrested for their participation. BCSO Criminal Investigations opened an investigation after deputies initially responded to a call from a local transient who claimed he had been robbed by two men. Investigators now believe that four black males came from Atlanta to the Destin area on October 1, 2017, and rented two vehicles. On October 3, 2017, they came to Bay County and two of them, Johnathan Johnson and Germarco Johnson, picked up transient Scott Allen McNeight, age 44. McNeight had been released from prison on October 1, 2017. They first asked McNeight if he had a photo ID, which he did. The men took a photo of the ID and sent it to the two other men from Atlanta. A fraudulent check was created using McNeight's information and stolen routing numbers and checking account numbers. The other two suspects from Atlanta, Clarence Suggs and Reginald Hughes, picked up transient James Dean Riles, age 58, for the same purpose. The two transients were first taken to a thrift store in Springfield. The Atlanta men purchased better clothing for the transients to wear when they entered banks to cash the fraudulent checks. Once dressed, McNeight was taken to a bank on Panama City Beach and was able to cash a $1,600 check. Although the agreement was for the transients to get about $80 for their part in the scam, McNeight took half of the money and fled on foot. Germarco Johnson and Johnathan Johnson were able to find McNeight at a gas station at Magnolia Beach Road and Thomas Drive. The two men entered the gas station after McNeight, one of them carrying a tire iron. Johnathan Johnson put McNeight in a choke hold, and the two threatened him with the tire iron and took the money, his wallet, and his cell phone. That was when McNeight called the BCSO to report the robbery. Although initially not forthcoming about the true circumstances surrounding the robbery, McNeight eventually told investigators how he got the money. James Dean Riles, the other transient, was unable to cash his fraudulent check at the first bank, and was taken by Clarence Suggs and Reginald Hughes to a second bank where he was successful. He was paid $80 and was taken by the two men to Millville and left. Riles then called the Panama City Police Department to file a complaint about what he had done. Riles had important tag information on one of the vehicles. Using the tag information, investigators were able to learn the two vehicles were rentals and eventually identified the four men from Atlanta. A BOLO was put out to local law enforcement on the vehicles. One was located at a business on East Avenue and contact was made with Clarence Suggs, age 27, and Reginald Hughes, age 26. They were arrested. Suggs and Hughes were charged with Counterfeiting of an Instrument with Intent to Defraud (two counts), Principal/Accessory to Uttering a False Bank Bill (two counts), and Larceny $300 or more but less than $5000 (two counts). A BCSO investigator spotted the second vehicle in a grocery store parking lot in Lynn Haven. He watched as a man left a bank adjacent to the parking lot and got into the vehicle with two black males. A traffic stop was conducted and Johnathan Johnson, age 22, and Germarco Johnson, age 27, cousins, were arrested.
Johnathan Johnson was charged with Robbery with a Weapon, Counterfeiting of an Instrument with Intent to Defraud (two counts), Principal/Accessory to Uttering a False Bank Bill (two counts), Larceny $300 or more but less than $5000 (two counts), and Violation of Probation for Financial Identity Fraud. Germarco Johnson was charged with Robbery with a Firearm, Counterfeiting of an Instrument with Intent to Defraud (two counts), Principal/Accessory to Uttering a False Bank Bill (two counts), and Larceny $300 or more but less than $5000 (two counts). The white male with them was identified as Charles Edward Sinard, age 39, a transient. He was also arrested and charged with Uttering a False Bank Bill, Larceny $300 or more but less than $5000, and Criminal Mischief, $1000 or more. The other two transients involved in this case, Riles and McNeight, were also arrested and charged with Uttering a False Bank Bill and Larceny $300 or more but less than $5000. During a search of the two vehicles under warrant, two printers, blank checks, cash, and a computer with check-making software were located and seized. All seven men were taken to the Bay County Jail and booked.
The Bay County Sheriff's Office announced the arrest of a local man on charges he committed sexual battery on a child. The victim confided in a family member that Shanard Cameron had molested her. Cameron, age 25, allegedly took the victim without parental consent to a festival and had sex with her. The victim was interviewed at Gulf Coast Children's Advocacy Center. Contact was made with Shanard Cameron and he was subsequently arrested and charged with Sexual Battery on a Child Under the Age of Twelve.
One of the most challenging tasks on an admin's list is the management of projects and tickets. This can be especially overwhelming when you have a larger IT department and a staff working on numerous projects at once. But the management of projects and ticket issues doesn't just fall on the heads of large companies. Even if you're a one-person shop consultancy, it can be easy to drown in a quagmire of projects. Thankfully, there are a lot of tools available to help you with that. If you happen to be a fan of open source (and who isn't?), those tools are not only readily available, they are free and (generally speaking) easy to set up. I want to walk you through the installation of one such tool: Trac. Trac can be used as a wiki, a project management system, and for tracking bugs in software development. I'll be demonstrating the installation on an Ubuntu Server 16.04. And so, let's get to it. Installation The installation isn't terribly challenging, but does require a bit of typing. Log into your Ubuntu server and let's take care of the dependencies. If your Ubuntu Server platform is without Apache, install it with the command: sudo apt-get install apache2 -y Once that completes, install Trac with the command: sudo apt-get install trac libapache2-mod-wsgi -y Next, the auth_digest module must be enabled with the command: sudo a2enmod auth_digest Now we create the necessary document root for Trac (and give it the correct permissions) with the following commands: sudo mkdir /var/lib/trac sudo mkdir -p /var/www/html/trac sudo chown www-data:www-data /var/www/html/trac For our next trick, we create a Trac project directory (we'll call it test) with the command: sudo trac-admin /var/lib/trac/test initenv test sqlite:db/trac.db Time to give that new directory the proper permissions. This is done by issuing the following commands: sudo trac-admin /var/lib/trac/test deploy /var/www/html/trac/test sudo chown -R www-data:www-data /var/lib/trac/test sudo chown -R www-data:www-data /var/www/html/trac/test Finally, we create both an admin user and a standard user with the commands: sudo htdigest -c /var/lib/trac/test/.htdigest "test" admin sudo htdigest /var/lib/trac/test/.htdigest "test" USER Where USER is the name of the user you prefer. After both of the above commands, you'll be prompted to type (and confirm) a password. Remember these passwords. Configure Apache Create an Apache .conf file for Trac with the following command: sudo nano /etc/apache2/sites-available/trac.conf Add the following content to the new file: WSGIScriptAlias /trac/test /var/www/html/trac/test/cgi-bin/trac.wsgi AuthType Digest AuthName "test" AuthUserFile /var/lib/trac/test/.htdigest Require valid-user Save and close that file. Enable our new site (and restart Apache) with the following commands: sudo a2ensite trac sudo systemctl restart apache2 Accessing Trac Open a web browser and point it to http://SERVER_IP/trac/test (where SERVER_IP is the IP address of your server). You will be prompted for login credentials. Log in with the user admin and the password you set when you created the admin user earlier. You can also log in with the non-admin user you created. There is actually no difference between the users (which could be a deal-breaker for some). One thing to note: If you need more users, you'll create them from the command line (in similar fashion as you did above). For every user you need to add to Trac, issue the command: sudo htdigest /var/lib/trac/test/.htdigest "test" USER Where USER is the username.
The above command will add the user to the project test. If you need to create more projects, you must go back through the process of creating a new project directory and then add users to it. Once you've successfully authenticated, you'll be presented with the Trac web interface, where you can begin working (Figure A). Figure A You can now go to Preferences and configure your installation of Trac. Once you've completed that, you can begin creating new tickets. If you get the (please configure the [header_logo] section in trac.ini) error, you can configure this with the command: sudo nano /var/lib/trac/test/conf/trac.ini In that file, you'll see the section: [header_logo] alt = (please configure the [header_logo] section in trac.ini) height = -1 link = src = site/your_project_logo.png width = -1 That is where you'll configure a header logo to suit your company. SEE: IT project management: 10 ways to stay under budget (free PDF) (TechRepublic) Congratulations You now have your Trac system up and running. Although you might find systems with more features (and a more powerful configuration system), Trac is very simple to set up and use. If you're looking for a basic ticketing system that can serve as a project management tool, Trac just might fit the bill.
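A quick addendum on the Apache configuration step above: on Apache 2.4, the AuthType, AuthName, AuthUserFile, and Require directives are normally placed inside a container such as a <Location> block scoped to the Trac URL rather than at the top level of the file. A minimal sketch of what a complete trac.conf could look like under that assumption, using the same test project paths as in the walkthrough:

WSGIScriptAlias /trac/test /var/www/html/trac/test/cgi-bin/trac.wsgi

<Location "/trac/test">
    # Digest authentication against the .htdigest file created with htdigest earlier;
    # the AuthName realm must match the "test" realm used in those commands
    AuthType Digest
    AuthName "test"
    AuthUserFile /var/lib/trac/test/.htdigest
    Require valid-user
</Location>

After saving the file, the same a2ensite trac and systemctl restart apache2 commands from the walkthrough apply.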
Building a slide deck, pitch, or presentation? Here are the big takeaways: A new class from Amazon, the AWS Certified Security - Specialty Exam, will validate a cloud pro's ability to secure the AWS platform. Cloud skills are in high demand, but added security expertise could help set job seekers apart. A new professional exam from Amazon Web Services (AWS) will help cloud experts validate their ability to secure data on the platform, according to a Monday blog post. The AWS Certified Security - Specialty Exam is now available to those who hold either an Associate or Cloud Practitioner certification from AWS. As noted in the post, AWS recommends that those taking the exam have at least five years' experience working in IT security and two years' experience working on AWS workloads. The exam will deal with such topics as "incident response, logging and monitoring, infrastructure security, identity and access management, and data protection," the post said. The exam consists of 65 multiple-choice questions and will likely take 170 minutes to complete. The registration fee is $300. SEE: Cloud computing policy (Tech Pro Research) According to the post, once the exam is complete, the test taker will have a working knowledge or understanding of the following: specialized data classifications on AWS; AWS data protection mechanisms; data encryption methods and AWS mechanisms to implement them; secure Internet protocols and how to implement them on AWS; and AWS security services and features. Additionally, the post noted that those who pass the exam will have a competency in working with AWS security services in production, an understanding of security operations, and the "ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements." For those looking to prepare for the exam, the post recommends going to the AWS Training website and working on the Advanced Architecting on AWS and Security Operations on AWS trainings. Additional security trainings on AWS Security Fundamentals, Authentication and Authorization with AWS Identity and Access Management, AWS Shared Responsibility Model, and AWS Well-Architected Training are also helpful. Additionally, compliance and security whitepapers will also help prepare would-be test takers.
Git is the largest revision control and collaboration system available for development. Git has replaced larger, more costly systems across the globe and has become the de facto standard tool for coders. But for some companies, small or large, housing code on a third-party cloud storage service might be a no-go. If that's the case, the only solution is in-house. For some, that means setting up a server and running a Git repository for the housing of proprietary or open source code. However, for some companies (especially those on the smaller side), having the resources (and time) to set up a server dedicated for Git storage may not be an option. That being the case, what do you do? Fortunately there's a solution, one that's incredibly simple. Said solution is Gitstorage, an easy-to-deploy appliance dedicated to housing your Git repositories. Each appliance is a single board computer (based on the Raspberry Pi). The device is smaller than a credit card, has no moving parts, generates no heat, is wall-mountable, is powered over standard USB (via the included micro-USB cable), and offers a standard ethernet connection. The full specs are: Dimensions - 3.44" × 2.93" × 1.28" (87.4 mm × 74.3 mm × 32.5 mm); Weight - 2.08 oz (59 g); Wall mount - 4 screws; Ambient temperature - 32 °F - 104 °F (0 °C - 40 °C); Memory capacity - 16 GB (GS-16) or 64 GB (GS-64); Storage for git repos - 10.6 GB (GS-16) or 58.6 GB (GS-64); Certifications - CE, FCC; Processor - H2 quadcore Cortex-A7 with 512 MB RAM; Power supply - Standard USB; Connectors - 1 × 10/100 MBit/s Ethernet, USB-A, Power (USB Micro-B); Web interface languages - English (US), French, German; Price (MSRP) - $399 USD (GS-16) or $499 USD (GS-64). But how well does the Gitstorage appliance work? Is it really that easy to deploy? Let's deploy one and find out. SEE: How to build a successful developer career (free PDF) (TechRepublic) Setup The setup of the Gitstorage is remarkably simple: Unpack the box. Plug the device into your network (you'll need a Cat5 cable). Connect the power cable. Wait 60 seconds. At this point, things get a bit complicated. According to the directions, you should then be able to point a browser to http://gitst.net and the Gitstorage interface will appear. I tried that on both a Linux desktop and a MacBook Pro. Neither machine could find the device. In fact, when I attempted to ping the gitst.net address, it resolved to a WAN IP address that didn't respond. The only way I was able to reach my Gitstorage device was to log into my router, look for gitstorage among the connected devices, and find out the IP address of the device. Once I had that IP address, I could point my browser to that address and log in with user root and password password. At that point, the setup wizard is presented (Figure A). Figure A The steps to the setup wizard are: language selection; EULA; naming the device; device root CA creation or import (optional); encryption password; admin setup (email/password); Dropbox setup (optional); and email setup (optional). Once I completed the wizard, trouble in paradise arose. During the first round, the final screen was blank. After a reboot, I had to walk through the wizard again. This time around the final screen appeared, but the All set link didn't work. So I returned to the IP address and was presented with a login screen. I attempted to use the admin email/password I'd set up during the wizard, but that wouldn't work. I then attempted root/password ... again to no avail.
After another reboot (unplug, wait a few seconds, plug back in), I was (once again) sent to the setup wizard (only this time, half-way through). Once again, the final screen links wouldn't work. Fortunately, I was sent two devices, so I unplugged the first (a GS-16) and plugged in the second (a GS-64). This time around, everything went smoothly and I was able to log into the Gitstorage interface (Figure B). Figure B Usage From the main interface, your first task is to create users. Click on the Users button and add the necessary information for a new user (Figure C). Figure C You can now create a new repository. However, new repositories can only be created by the Root user. This is a problem. Why? Remember that admin user created during setup? I was unable to log in with that user. So the only user with root privileges is root and the password is, well, not even remotely secure. Changing that password isn't nearly as intuitive as you might think (at least not from an admin perspective). Instead of the root user password change option being in the Settings section, you must click on the Root user button in the upper right corner. From the popup menu (Figure D), click Account. Figure D In the resulting window, click Password. When prompted, type (and verify) the new password for the root user. Log out and log back in with your new credentials. Now click on the Repositories entry in the left navigation, click the Create button, give the repository a name, and click Submit. Once you've created the repository, click on the Settings entry for it and then click the Add user button, so you can add users to the repository (otherwise the root user will be the only one with access). SEE: 10 Terminal commands to speed your work on the Mac (free PDF) (TechRepublic) Smooth sailing And that's pretty much all there is to setting up a Gitstorage device. Although I did have one hiccup with the first appliance, setting up the second resulted in some pretty smooth sailing for using an in-house Git repository. If you're looking for an incredibly simple solution for code collaboration (and you don't have the resources to set up your own Git server), I highly recommend a Gitstorage device. It's a simple, small, and elegant solution that should serve you well.
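One follow-up to the usage steps above: once a repository and a user exist on the device, day-to-day work from a developer machine is ordinary Git over the network. Here's a minimal sketch, assuming the appliance serves repositories over HTTPS at its IP address and that the repository was named myproject; the exact clone URL may differ, so copy the one shown on the repository's page in the Gitstorage web interface.

# Clone the repository from the appliance (URL format and IP are assumptions;
# use the clone URL shown in the web interface), authenticating as a user you added
git clone https://192.168.1.50/myproject.git
cd myproject

# Make an initial commit and push it back to the device
echo "Project notes" > README.md
git add README.md
git commit -m "Initial commit"
git push origin master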
Qualcomm's new Snapdragon XR1 chip, announced via a Tuesday press release, aims to break down the barrier for high-quality virtual reality (VR) and augmented reality (AR) and bring the technologies to lower-end devices. If successful, the XR1 chip could improve technologies found in modern smart glasses, and make VR and AR more affordable to get into for smaller companies. The chip could also help bring more artificial intelligence (AI) functionality into AR as well, the release noted. In its release, Qualcomm called the XR1 an Extended Reality (XR) platform, noting that it will help bring higher quality experiences to mass-produced devices. And the addition of the AI capabilities will provide "better interactivity, power consumption and thermal efficiency," the release said. SEE: Virtual and augmented reality policy (Tech Pro Research) The XR1 features an ARM-based multi-core CPU, a vector processor, a GPU, and a dedicated AI engine for on-board processing. A software layer with dedicated machine learning, connectivity, and security is also part of the platform, the release said. The chip can handle up to 4K definition at 60 frames per second, according to the release. It also supports OpenGL, OpenCL, and Vulkan, and its AI capabilities contribute to computer vision features. Other hallmarks of the XR1 are high-fidelity audio and six degrees of freedom (6DoF) head tracking and controller capabilities, making it easier to get around in the virtual world. "As technology evolves and consumer demand grows, we envision XR devices playing a wider variety of roles in consumers' and workers' daily lives," Alex Katouzian, senior vice president and general manager of Qualcomm's Mobile Business Unit, said in the release. OEMs like Meta, VIVE, Vuzix, and Pico are already building on the XR1 platform, the release said. The big takeaways for tech leaders: Qualcomm has unveiled the Snapdragon XR1 chip, which could bring high-quality AR and VR experiences to more users, at a lower cost. The Qualcomm Snapdragon XR1 features an on-board AI engine to boost computer vision capabilities in AR applications.
The EU's General Data Protection Regulation (GDPR) went into effect in May, requiring all organizations that handle the data of EU citizens to comply with its provisions regarding collecting and using personal data. However, a majority of companies likely missed the compliance deadline, and many employees remain unaware of the policies needed to keep data safe. "Data privacy is a hot topic with GDPR going into effect," said Dave Rickard, technical director at CIPHER Security. "An awful lot of companies may not think they have exposure to it, but there are lots of variables in that." For example, one online retailer Rickard works with has many customers from the EU, but can't geolocate them from the website. Others don't work with EU citizens, but have data processing and storage facilities there, which are also subject to GDPR. SEE: EU General Data Protection Regulation (GDPR) policy (Tech Pro Research) GDPR will likely influence data privacy policies in other countries, Rickard said. However, cultural differences, particularly between the EU and US, may make this difficult. "In the EU people are very centered on the perspective that 'My name, my social security number, my passport information, everything that is PII about me, belongs to me. It's part of my individuality,'" he said. "Whereas in North America, people have long since taken the perspective instead that data is currency. There are so many business models that are built on it. Data is money." The majority of companies that need to be compliant with GDPR are not yet, Rickard said. "I'd say compliance right now is only at about 35% or 40% at the most," he said. "I think a lot of people are taking a wait and see approach." Some of the bigger players like Facebook, Google, and Amazon are going to be the canaries in the coal mine, Rickard said. "I think that they'll have actions taken on them first, and people are going to wait and see if the actual GDPR penalties play out the way that they've been published." Companies that fail to comply with GDPR will face a penalty of either 4% of their global revenue or €20 million, whichever is greater. Here are five types of policies that companies must ensure they have in place and have trained employees on in the age of GDPR, according to Rickard. 1. Encryption policies Most companies lack policies around data encryption, Rickard said. "Most people who are data owners are unaware of whether their data is encrypted at rest or not," he added. "GDPR is big on encryption at rest." SEE: Encryption policy (Tech Pro Research) 2. Acceptable use policies An acceptable use policy should cover things like what applications are allowed, what web searching and social media habits are appropriate for the business, and the potential threats to brand reputation, Rickard said. 3. Password policies Passwords remain a common digital entry point into an organization for hackers. Even if, in the best case scenario, employees use complex passwords that are changed often and not shared, human error and carelessness can still put a business at risk. "One of the easiest ways to breach a company is to put somebody on the janitorial staff and go looking at desks," Rickard said. "People often have Post-it notes on monitors with passwords on them." 4. Email policies IT should have an email policy in place that hardens systems and can detect spam and viruses, Rickard said. "The kind of information that can be disclosed via email should be spelled out very clearly," he added. 5.
Data processing policies Companies need to do data process flow mapping to see what data is being collected, how it's being processed, and who is receiving processed copies, Rickard said. "GDPR closes all those gaps," he added. Employee training is paramount for ensuring these policies are enforced, Rickard said. Raising awareness of the threat landscape and common vulnerabilities can help counteract human error. "Security awareness and training is the cornerstone of any security program," he added. For tips on how to best train employees on cybersecurity practices, click here.
Over the past two decades, Java has arguably been the most successful programming language on the planet. Go is cool, Swift is nifty, but old-school Java keeps reinventing itself to power both yesterday's and tomorrow's applications. Depending on how you count, some 14 million Java programmers code today, with many of them paid well to maintain massive enterprise applications (an estimated 80% of enterprise workloads run on Java). Redmonk, in its latest Q1 2018 survey, says Java is the second-most popular language after JavaScript among developers. Not that Java lacks challenges. For example, Java is perhaps the most divisive technology in the industry—a morass of competing vendors with a constipated governance model that excludes much more than it includes. In this way, Java has left obvious gaps and frustration for developers who need a bridge to a cloud-native future. SEE: Job description: Java developer (Tech Pro Research) To ease that frustration, Tuesday the Eclipse Foundation unveiled new directions for Java EE under the recently-named-by-community-vote Jakarta EE Working Group, the successor to Java EE (which remains licensed by Oracle and maintained under the JCP). Java, cloud-friendly? It just might happen. An open Java The one thing everyone agrees about Java is that it's imperfect. And yet there's hope. No longer your grandparent's Cobol, what if a vibrant community embraced Jakarta EE and pushed it much faster than any Java EE before? Under the Eclipse Foundation's guidance, we may finally get the power of open source collaboration to build on the best of Java's two decades of work. Through this new Jakarta EE Working Group process, we should see big Java EE vendors like IBM, Red Hat, and Oracle working within the open processes of the Eclipse Foundation with smaller vendors like Tomitribe and Payara. In this world, there's no single vendor to impose its will on Java. Instead, we may finally get a true code meritocracy where Java communities and individuals function as peers. Instead of a divisive force, Jakarta EE could become a catalyst to join disparate Java communities behind a shared goal. In this case, my bet is on a race to some version of cloud-native implementation for Jakarta EE. You can read all the details on the new Eclipse Foundation governance model online but for me, it's much more interesting to see where the community wants to go. To the credit of the Eclipse Foundation, they surveyed more than 1,800 Java developers worldwide to take the pulse of the Java community. Under Oracle's (or Sun's) control, this sort of community outreach simply didn't happen (though, to its credit, Oracle made the decision to move Java to the Eclipse Foundation's stewardship). Java's cloudy future In the survey, the Eclipse Foundation learned that the three most critical areas that developers want Jakarta EE to prioritize are better support for microservices (60%), native integration with Kubernetes (57%) and a faster pace of innovation (47%). SEE: How to build a successful developer career (free PDF) (TechRepublic) Almost half (45%) of the Java developers surveyed are already building microservices, with more (21%) planning to do so within the next 12 months. Add to this the fact that half of these developers currently only run a fifth of their Java applications in the cloud but 30% say they'll run 60% or more of their applications in the cloud, and it's clear how much pent-up demand there is for a more cloud-friendly Java.
To get there, roughly a third of the developers surveyed have embraced Kubernetes. This is a cloud-savvy crowd that needs their preferred programming model to keep pace with their ambitions. None of this was a surprise, of course. Java developers aren't living in a cloud-free world. Developers want a framework of tools that helps them be more successful using the Java skills they already have to build next-generation, cloud native applications. With the new Jakarta EE, they just might get their wish. Click here to subscribe to TechRepublic's Cloud Insights newsletter. Subscribe Also see
High demand and low supply of IT professionals may lead to turnover in the new year, a new report found. Some 32% of IT professionals said they plan to search for or take an IT job with a new employer in 2018, according to Spiceworks' 2018 IT Career Outlook. Among those planning to make a job move, 75% said they are seeking a better salary, 70% said they want to advance their skills, and 39% said they want to work for a company that prioritizes IT more than the one they currently work for. Of the 2,163 IT professionals from North America and Europe surveyed, 7% said they plan to start working as a consultant, while 5% said they plan to leave the IT industry altogether. Another 2% reported plans to retire in 2018. Some employees said they expect positive changes from their current employer in the new year: 51% of IT professionals said they expect a raise from their current employer next year, while 21% said they also expect a promotion. However, 24% said they don't expect any career changes or raises in the next year. SEE: IT jobs 2018: Hiring priorities, growth areas, and strategies to fill open roles (Tech Pro Research) Millennials in particular (36%) were more likely to say they were seeking new employment—more than Gen Xers (32%) and baby boomers (23%). Millennial IT professionals are also more likely to leave their current employer to find a better salary, advance their skills, work for a more talented team, and receive better employee perks than older employees. Meanwhile, Gen X IT professionals are more likely to leave their jobs to seek a better work-life balance, while baby boomers are more likely to leave due to burnout. Despite those who plan to leave their jobs, 70% of IT professionals say they are satisfied with their current jobs—though 63% say they believe they are underpaid, the report found. This number is even higher among millennials: 68% of millennial IT workers feel underpaid, compared to 60% of Gen X and 61% of baby boomers. In terms of salary, millennial IT professionals are paid a median income of $50,000 per year, while Gen X IT professionals are paid $65,000, and baby boomers are paid $70,000. These salaries also correlate to years of experience, the report noted. In terms of tech skills needed to be successful in any IT job in the coming year, 81% of IT professionals reported that cybersecurity expertise was critical. Despite understanding how critical this area is, only 19% of IT pros reported having advanced cybersecurity knowledge—potentially putting organizations at risk. This echoes previous research about the dearth of cybersecurity professionals currently available to companies, as well as the need to upskill employees to fill security gaps. SEE: Cheat sheet: How to become a cybersecurity pro About 75% of IT professionals also said that it was critical to have experience in networking, infrastructure hardware, end-user devices, and storage and backup. Of these, 41% said they have advanced networking skills, 50% said they have advanced infrastructure hardware skills, and 79% said they are advanced in supporting and troubleshooting end user devices, including laptops, desktops, and tablets. "Although the majority of IT professionals are satisfied with their jobs, many also believe they should be making more money, and will take the initiative to find an employer who is willing to pay them what they're worth in 2018," Peter Tsai, senior technology analyst at Spiceworks, said in a press release. 
"Many IT professionals are also motivated to change jobs to advance their skills, particularly in cybersecurity. As data breaches and ransomware outbreaks continue to haunt businesses, IT professionals recognize there is high demand for skilled security professionals now, and in the years to come." Want to use this data in your next business presentation? Feel free to copy and paste these top takeaways into your next slideshow. 32% of IT professionals said they plan to search for or take an IT job with a new employer in 2018. -Spiceworks, 2017 Among IT pros planning to make a job move, 75% said they are seeking a better salary, 70% said they want to advance their skills, and 39% said they want to work for a company that prioritizes IT more. -Spiceworks, 2017 81% of IT professionals reported that cybersecurity expertise was critical in the field, but only 19% said they had advanced cybersecurity skills. -Spiceworks, 2017 Image: iStockphoto/Rawpixel Keep up to date on all of the latest leadership news. Click here to subscribe to the TechRepublic Executive Briefing newsletter. Subscribe Also see
Unfortunately, it seems there's a phishing scheme to go along with virtually every event in life, whether a holiday, a tragedy, or an annual ritual. Tax time is not exempt, so to speak. Whether you work in finance or you support users who do, it's important to be on the lookout this tax season for phishing schemes geared towards obtaining confidential information from unsuspecting individuals. What should users look out for? A common phishing attempt involves compromised or spoofed emails which purport to be from an executive at your organization and are sent either to human resources or finance/payroll employees. The email requests a list of employees and their related W-2 forms. That's not all, however. Another common scam (which can occur throughout the year) involves receiving a phone call from an individual claiming to be from the IRS (caller ID can be spoofed to show this as well) who informs you that you owe money for back taxes and often threatens law enforcement retribution if payment (usually via credit card over the phone) isn't provided. The IRS will never call you on the phone to report you owe them money nor demand money over the phone; they utilize the postal service for such notifications. They also will not engage in threats and are supposed to provide an opportunity for you to work constructively with them or negotiate payment. SEE: IT leader's guide to cyberattack recovery (Tech Pro Research) What standard protection methods should be used? The typical safeguards against phishing can protect you and your employees; establish a policy against requesting confidential information through email, call people directly to verify such requests, arrange for secure transfer of data, and limit the number of employees who possess the authority to access or handle W-2 forms. The IRS also recommends contacting them about any malicious activity. Phishing attempts can be reported to [email protected]. If someone from your company has given out W-2 information, contact [email protected] with a description of what happened and how many employees were affected. Also make sure not to attach any confidential information! If your company is contacted by scammers claiming you owe the IRS money, report it via the IRS Impersonation Scam Reporting webpage. You can also call 800-366-4484. You should also report this to the Federal Trade Commission via the FTC Complaint Assistant on FTC.gov. What else is available to help here? Education and establishing proper procedures can be helpful in minimizing risk, but I also highly recommend using technology to safeguard data as well. While both technology and humans may be prone to failure, technology is harder to fool or take advantage of. With that in mind, data loss prevention (DLP) can be a handy tool in combating phishing gimmicks of this nature. DLP systems examine traffic coming in and out of an organization: emails, instant messages, web access - anything that is sent over the network. These systems can sniff out confidential information such as Social Security numbers and block them from being transmitted. This comes with a potential cost, however; legitimate traffic may end up blocked, such as when employees email tax information to their tax preparers or their own personal accounts. This can pose a challenge for DLP systems (and those responsible for administering them) in separating the wheat from the chaff. The end result is undoubtedly a slew of false positives with frustrated and/or confused employees. 
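To make the DLP idea concrete, here is a minimal sketch of the pattern-plus-context check a DLP engine might apply to outbound messages. It is not any vendor's detection logic—the pattern, keywords, and verdicts below are simplified stand-ins—but it shows why false positives are hard to avoid when a legitimate email to a tax preparer looks a lot like a leak.

```python
import re

# Rough SSN pattern (NNN-NN-NNNN). Real DLP engines add validation rules,
# document fingerprints, and contextual keywords to cut false positives.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
W2_KEYWORDS = ("w-2", "wage and tax statement", "social security number")

def inspect_outbound(message: str) -> str:
    """Return a verdict for one outbound message: block, review, or allow."""
    text = message.lower()
    has_ssn = bool(SSN_PATTERN.search(message))
    has_w2_context = any(keyword in text for keyword in W2_KEYWORDS)
    if has_ssn and has_w2_context:
        return "block"    # looks like W-2 data leaving the network
    if has_ssn or has_w2_context:
        return "review"   # the ambiguous, false-positive-prone middle ground
    return "allow"

print(inspect_outbound("Attached is the W-2 for employee 123-45-6789"))  # block
print(inspect_outbound("Lunch at noon?"))                                # allow
```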
SEE: Intrusion detection policy (Tech Pro Research) Another potential solution is user and entity behavior analytics (UEBA). UEBA can determine the likelihood the employee is sending tax information to themselves via their personal email address by analyzing behavioral patterns to determine the legitimacy of specific activities. For example, if an employee named Ray Donovan sends a W-2 form from his corporate email address ([email protected]) to his Gmail address ([email protected]), UEBA can determine that it's highly likely this information is being sent to the same person and will not send a critical alert nor block the transmission. It helps if Ray has a history of sending himself emails of this nature so UEBA can mark that behavior as normal. However, in a genuine phishing scenario where Ray sends a W-2 form to [email protected], an email address he has not previously contacted, UEBA could determine that it's not the same person, analyze further using behavioral comparisons and send alerts or take action as necessary. What about a situation where an employee is emailing confidential information to themselves when they shouldn't (such as someone else's W-2 form, or their own despite company policies prohibiting this)? UEBA can still send alerts which can then result in investigational activity and appropriate discipline as needed, including termination. Making employees aware that this activity is analyzed and monitored can serve as a deterrent and ensure confidential information remains in appropriate hands. For more security tips and news, subscribe to our Cybersecurity Insider newsletter. Subscribe Also see:
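Before moving on, here is a rough sketch of the UEBA comparison described above. The article's example addresses are redacted, so the addresses below are placeholders, and the similarity-plus-history check is a simplified stand-in for the much richer behavioral baselines a real UEBA product builds.

```python
from difflib import SequenceMatcher

def same_person_score(corporate_addr: str, external_addr: str) -> float:
    """Crude similarity (0..1) between the local parts of two email addresses."""
    a = corporate_addr.split("@")[0].lower()
    b = external_addr.split("@")[0].lower()
    return SequenceMatcher(None, a, b).ratio()

def assess_transfer(sender: str, recipient: str, prior_recipients: set,
                    threshold: float = 0.8) -> str:
    """Mimic the decision in the article: likely self-forwarding to a familiar
    address is low severity; an unfamiliar recipient raises the alarm."""
    familiar = recipient in prior_recipients
    likely_self = same_person_score(sender, recipient) >= threshold
    if likely_self and familiar:
        return "log only"
    if likely_self:
        return "low-priority alert"
    return "critical alert / block"

history = {"ray.donovan@gmail.example"}  # placeholder personal address seen before
print(assess_transfer("ray.donovan@corp.example", "ray.donovan@gmail.example", history))
print(assess_transfer("ray.donovan@corp.example", "attacker123@mail.example", history))
```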
Building a slide deck, pitch, or presentation? Here are the big takeaways: HTC has announced its latest VR headset, the VIVE Pro, and has also opened up preorders for the $799 unit. The HTC VIVE Pro offers a 78% increase in resolution over the previous VIVE model and is also capable of wireless connectivity using WiGig technology. HTC's new flagship VR headset, the VIVE Pro, is now available for preorder for $799. Included with the new VR headset is a six-month subscription to VIVEPORT, a VR gaming subscription service where subscribers can choose five titles from the service's catalog to rent at any given time. After the trial expires, a VIVEPORT subscription will cost $8.99 per month, though purchasing a subscription prior to March 22 will lock in the current rate of $6.99 per month; the standard rate rises to $8.99 on that date. Along with the release of the VIVE Pro, HTC is reducing the cost of the currently available VIVE headset to $499, a reduction of $100. Purchasing the currently available VIVE includes a two-month subscription to VIVEPORT and a free copy of Fallout 4 VR. The VIVE Pro's capabilities The VIVE Pro will begin shipping on April 5, 2018, and is a considerable upgrade over the previous VIVE model, all without needing much in the way of upgrades to the PC that powers the headset (VIVE units aren't standalone). The VIVE Pro has dual OLED screens with a resolution of 2880x1600, a 78% increase in resolution over the current generation VIVE. It has a 90 Hz refresh rate and a 110-degree field of view and can be used with the current generation of controllers and base stations. SEE: New equipment budget policy (Tech Pro Research) The VIVE Pro VR headset is also WiGig compatible, meaning that users won't need to tether it to a computer or base station, provided they're willing to pay for a separate wireless module, which hasn't been priced or given a release date yet. HTC VIVE US general manager Daniel O'Brien said that the VIVE Pro is designed to deliver "the best quality display and visual experience to the most discerning VR enthusiasts," as well as offering a premium product to drive adoption of VR technology and products. Developers interested in becoming a part of HTC's vision for the future of VR can learn more about building applications for the HTC VIVE Pro at the VIVE developer's portal. Like other VR development platforms, VIVE makes use of Unity and the Unreal Engine and a proprietary SDK for building apps. Learn more about the latest tech trends by subscribing to our Next Big Thing newsletter.
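The 78% figure checks out as a straight pixel count, assuming the first-generation VIVE's combined panel resolution of 2160x1200 (1080x1200 per eye):

```python
# Pixel-count math behind the "78% increase in resolution" claim.
vive_pro_pixels = 2880 * 1600   # VIVE Pro, dual OLED panels combined
vive_pixels = 2160 * 1200       # first-generation VIVE (assumed here)

increase = vive_pro_pixels / vive_pixels - 1
print(f"{vive_pro_pixels:,} vs {vive_pixels:,} pixels -> {increase:.0%} more")
# 4,608,000 vs 2,592,000 pixels -> 78% more
```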
Smart locks began appearing on doors when building automation and the Internet of Things (IoT) went mainstream. However, the public's acceptance of smart locks has been less than stellar—initial cost vs. actual benefits are seemingly the primary reason why. Image: Amazon The low adoption of smart locks may soon change if the powers that be at Amazon have their way. The company recently introduced Amazon Key, a remotely-controlled building-access platform—consisting of Amazon's Cloud Cam, a compatible smart lock, and smartphone app (shown to the right)—that allows Amazon-approved delivery personnel to open locked doors and leave deliveries inside the customer's home or office. A slew of additional conveniences not related to package delivery may also help the acceptance of smart locks. That said, the public's interest in smart locks will only improve if the benefits outweigh the costs, and the technology is proven to be physically safe and electronically secure. Security issues have already been reported about Amazon Key. Liam Tung in his ZDNet article Amazon: We're fixing a flaw that leaves Key security camera open to Wi-Fi jamming writes, "A malicious courier could easily freeze the Key's Cloud Cam and roam a customer's house unmonitored." Concerns about smart locks and security were raised way back in 2013. My TechRepublic article High-tech home security products: Who are they really helping? quotes several experts who question the security of smart locks and the technology supporting them. SEE: Internet of Things Policy (Tech Pro Research) AV-TEST put six smart locks' data security through their paces Knowing what experts were saying about smart-lock systems four years ago and the likelihood of smart locks becoming popular, the people at AV-TEST, an independent IT-security testing lab, decided to see if things have improved. The lab's engineers developed a test program and put these six smart locks through their paces: August Smart Lock (USA) Burg-Wachter secuENTRY easy 501 (Germany) Danalock V3 (Denmark) eQ-3 Equiva Bluetooth Smart Lock (Germany) Noke Padlock (USA) Nuki Combo (Austria) Test environments Data security was the first thing considered by the engineers with special emphasis on acquisition, storage, and transmission of data; the following image depicts how they employed Wireshark to capture traffic between the smart lock being tested and the controlling smartphone application. Besides communications, the team examined each system's hardware and software, tested the software-update process, and determined whether the associated smart-lock application had any security issues. Image: AV-TEST The results It seems smart locks have improved considerably in the past four-plus years. From the AV-TEST report: "Convenience does not have to mean less security. This reassuring conclusion can be made following the surprisingly strong results of the smart-lock testing." Concerning the test results, the test engineers offer the following insights. Installation: Despite physical differences, all smart locks evaluated by AV-TEST installed easily—systems manufactured by eQ-3 and Nuki being the easiest. Local communications: All tested smart locks are locally activated via Bluetooth. "As a standard feature, the smart locks use encryption, mostly AES with at least 128 bits," mentions the report. "Three locks, August, Danalock, and Nuki can encrypt at a higher rate—AES with 256 bits." 
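As a rough illustration of what "AES with 256 bits" buys a lock vendor, the sketch below encrypts and authenticates a hypothetical unlock command with AES-GCM via the third-party cryptography package. It is conceptual only: no real lock's protocol, key exchange, or replay protection is reproduced here.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A 256-bit AES-GCM session key protecting a made-up "unlock" command,
# conceptually similar to the AES-128/256 Bluetooth encryption AV-TEST describes.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per message
command = b"UNLOCK:front-door:user42"
ciphertext = aesgcm.encrypt(nonce, command, b"lock-v3")  # third arg: associated data

# The lock, holding the same key, authenticates and decrypts the command;
# any tampering with the ciphertext or associated data raises an exception.
assert aesgcm.decrypt(nonce, ciphertext, b"lock-v3") == command
print(len(ciphertext), "bytes travel over the air instead of a plaintext command")
```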
The AV-TEST engineers report that smart locks by August, Danalock, and Nuki can integrate with local Wi-Fi networks; this allows location-independent remote control using the mobile device's smart-lock app. According to the report, neither Bluetooth nor SSL-encrypted Wi-Fi connections introduce any detectable vulnerabilities. Data protection: AV-TEST's engineers measured each smart lock's privacy policy against European data-protection law. One concern centered on whether systems save more data than is needed to operate properly. From the report: "For August, Danalock, and Noke, the testers see a need for improvement, e.g., in terms of information on stored data and its use by third parties. An adaptation to European data-protection law would easily remedy these defects." SEE: Cybersecurity in an IoT and mobile world (free PDF) (ZDNet/TechRepublic special report) Smartphone-app security: The report warns that apps are a potential target for attackers, in particular how each app manages access permissions and log files. All smart-lock systems but August and Danalock handled access and log files adequately. The engineers are concerned that August and Danalock generate comprehensive debug logs that provide clues to how the app functions. Additionally, August keeps debug logs in a protected area, whereas Danalock does not, making it possible to read the log files using tools like Android Logcat. The report suggests both August and Danalock need to improve security in this area. One serious misstep: The AV-TEST paper took issue with the smart lock from Burg-Wachter because the lock system does not require the user to change the default admin password. "A dangerous complacency, as IoT devices with unchanged default login details are easy prey for attackers," mentions the report. Overall results Each smart lock was rated on local communications, external communications, app security, and data protection, with three stars holding top honors. The following graph shows the overall results. Image: AV-TEST On a positive note, the AV-TEST report notes, "All in all, it appears the manufacturers of smart door locks, unlike many other manufacturers of smart home products, did their homework." The report concludes by saying, "The AV-TEST Institute rated five out of six of the locking systems evaluated in the quick test as having solid basic security with theoretical vulnerabilities at the most." Stay informed about IT security news, tips, and tutorials—subscribe to our Cybersecurity Insider newsletter.
Artificial intelligence (AI) is increasingly mocked as a marketing term. But AI is also being used to create some legitimately useful tools. So, to beat back some of the less useful uses of the term, here are five things AI might actually be good for: 1. Farming FarmLogs is an example of complex data analysis that tracks weather, soil conditions, and historical satellite imagery, and helps farmers determine what kind of plant growth to expect and how to maximize crop yields. SEE: Farming for the future: How one company uses big data to maximize yields and minimize impact (TechRepublic) 2. Medical diagnosis Watson made this use of AI famous, and while you can debate its effectiveness, others like Intel are working on things like precision medicine. Machine learning can compare molecular tests with previous cases to customize treatments. Computer interpretation of medical images as an aid to diagnosis is also making rapid advances. SEE: Beware AI's magical promises, as seen in IBM Watson's underwhelming cancer play (TechRepublic) 3. Stopping predators The National Center for Missing and Exploited Children is experimenting with AI to help automate and speed up scanning websites for suspicious content. SEE: IT leader's guide to deep learning (Tech Pro Research) 4. Recruiting AI can help sort through resumes and rank candidates. Unilever used HireVue, an AI-driven video interviewing tool, to analyze candidates' answers, body language, and tone, cutting down time to hire and increasing offers and acceptance rates. SEE: How to implement AI and machine learning (ZDNet) | Download the report as a PDF (TechRepublic) 5. Customer service AI assistants were made famous by smartphones, but where they really shine is providing assistance to human customer service agents. AI can be used to process natural language and route people to the right agent, and even to listen in and prompt agents with queries and responses. We didn't even include autonomous cars, which use all kinds of machine learning and types of AI to interpret sensors. And there are loads more. There's a lot of fog around the idea of AI these days, but if you look closely you can see some pretty good examples of the real thing. For more about artificial intelligence and other innovations, subscribe to our Next Big Thing newsletter.
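The customer service item—processing natural language to route people to the right agent—is easy to sketch with off-the-shelf tools. The toy router below uses scikit-learn, and the queues and training lines are invented for illustration; a production system would train on thousands of labeled transcripts.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set mapping customer messages to support queues.
messages = [
    "my invoice total looks wrong", "I was charged twice this month",
    "the app crashes when I log in", "error 500 when uploading a file",
    "how do I cancel my subscription", "I want to upgrade my plan",
]
queues = ["billing", "billing", "technical", "technical", "account", "account"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(messages, queues)

# Route new messages to a queue (expected labels, not guaranteed with so little data).
print(router.predict(["I keep getting an error when I sign in"])[0])
print(router.predict(["why is my bill higher than last month"])[0])
```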
Manish Vyas, president of business communication at Tech Mahindra, spoke with TechRepublic's Dan Patterson about innovations that will be enabled by the arrival of 5G. Watch the video, or read the full transcript of their conversation below: Patterson: Help us understand, Manish, 5G we hear a lot of hype about. What's the reality of this new wireless standard? Vyas: The reality is that it does promise us to transform. You very rightly use the word digital, but my translation of digital is it promises to change the way people would live, work, and play going forward in a more significant fashion than what you saw with the previous generations. If I could just expand on that a bit, 5G is not just about the throughput and the speed and the power and the latencies, but 5G is about exciting, exciting propositions that will come our way both in the enterprise space and in the consumer domain. Given that 5G also combined with some of the other later technological innovations that are happening as we speak, for example, artificial intelligence, will just enable a certain set of use cases. It will just change the paradigm of how people communicate, how people consume experiences, or how people transact business. All of that is going to change, so I guess that's the reason why everybody is so hyped up, if I may, about 5G. Patterson: So, how? We know that the capabilities of 2G, 3G, and 4G has iterated and created new technological capabilities. What specifically about 5G enables IoT, enables high-speed mobile devices, and enables artificial intelligence? Vyas: Yes. I think it is, and all of them are related, the convergence of other software technologies that are advancing at the speed of light right now. Let's take two of them, just to build a use case, right? Let's take VR, virtual reality. Let's take IoT, which is the ability to connect the devices and harness the power of data, right? Now, combine that with the wireless advancements that will happen with the 5G technology from an access, as well as from how the data is processed. It will create use cases that have hereto not been possible. SEE: Virtual and augmented reality policy (Tech Pro Research) One of my favorite examples that I often give is think of an NFL game, and think of the tailgate parties that happen outside any stadium. There are any number of thousands of people who are perennial tailgate party goers. They don't even enter the stadium game after game, year after year, but they like to spend a lot of money outside the stadium. The experience that they will now get, imagine with the VR and with the discoverable aspect of the network that 5G does, where if you, Dan, for example, as a big fan of a certain running back of a certain NFL team, as you're partying with your buddies outside a certain stadium, you will be notified that something dramatic happened inside the stadium and with whatever device that is available at the time, which is also by the way advancing, with 5G and with the fact that you are discoverable by the network and the latencies and the availability of the network is like never before, the throughout is like never before, so the IoT on a camera or a device, the VR experience, powered by 5G, you will there and then and you will be the only person who will be able to stand almost right next to the running back and experience as the things explode and happen at the site. 
Just the sheer power of that use case is phenomenal, and the money that different people in the ecosystem will make out of it, including the telecom service provider, I think can be quite an interesting paradigm in my view. Patterson: What are some of the challenges or roadblocks to the rollout of 5G? Vyas: I think there are plenty. One of the biggest ones is going to be, without even getting into the technology aspect at this point, I would say is still a major part of the industry will still struggle to find the justification to invest in the capital, from a business case ROI standpoint. Now, on that one, one is also hearing as I go around the world and meet different CTOs and other executives of service providers, one is hearing that there are different ways of skinning that cat, if I may. The overall cost of 5G deployment is likely to be atmospherically high at this point, there are all indicators that the likely cost is going to be cheaper than what was in 4G, and if that happens, that itself is probably a business case-justification. SEE: IT pro's guide to the evolution and impact of 5G technology (free PDF) (TechRepublic) Of course, there is a bigger underlying assumption that the prices in the marketplace at least hold up, and if they drop, they drop only marginally and not dramatically. There's no guarantee of that, but at least that's a possibility. That's one challenge, clearly that the market is going to face. The second is going to be a more technical and execution challenge, which is the availability of the technology, the trials that need to go through in a very satisfactory fashion worldwide so that people get enough confidence to go and deploy the technology at a very large scale, which I don't think is a question of if, it is more a question of when. I guess the challenge would be more for delay rather than really making it happen. Patterson: Manish, I think that that is a great point. I wonder if you could leave us with say, the next 18- to 36-months in the rollout of 5G. Where are we in a year, year and a half, and where are we in three or four years? Vyas: I would say in three to four years time, I would be surprised if the world is not entirely enabled by 5G. When I say that with a sense of responsibility that there would be still be a certain set of companies that may not adopt it, because of how they would want to position it, which would be challenging, and they would be under tremendous pressure, but I believe that in the next three to four years, as the other technologies also evolve, the other software technologies, I believe in the next three to four years, we will see a very large scale deployment worldwide. In the imminent short-term, the 18, 20 months, I think we will see the early adopters clearly making progress. This year alone, we might see some of the major tier-one service providers in the North American continent, we will see them doing about 12 to 15 trials. By that, I mean 12 to 15 locations or cities would be 5G-enabled by the end of the year. For more about the latest tech innovations, subscribe to our Next Big Thing newsletter. Subscribe Also see
Microsoft is constantly tinkering with Windows 10, dropping in new features and swapping out old ones, but there are a few annoyances it seems unable or unwilling to fix. What ties most of the following complaints together is Microsoft's reluctance to let users choose for themselves, preferring instead to try to coerce users and control how they use their computer. Here are the five ways Windows 10 is broken that Microsoft needs to sort out. 1. Sort out the Control Panel / Settings app confusion Windows 10 adopts a rather confused approach to managing settings—splitting the options between the legacy Control Panel and the Settings app. Microsoft appears to be in the process of gradually migrating these options to the new Settings app, with each big feature update further diminishing the role of the Control Panel. However, having to juggle between the two menus is not particularly user friendly, and the changes in where settings live are particularly aggravating for some users, as can be seen by the large number of forum posts this issue generates. You can use the Search function to locate the Settings you need, but there is still clearly a large number of users who struggle to locate what they're looking for. 2. Give all users control over updates All Windows 10 users should be given control over when updates are applied. Currently there is no simple option for Windows 10 Home users to defer updates in the same way there is for users of Windows 10 Pro and Enterprise editions. SEE: Windows 10: Streamline your work with these power tips (free TechRepublic PDF) Users of non-Home editions can toggle options in menus to put off updates for months at the very least. However, Home users have to engage in hacky workarounds, such as setting their connection to 'Metered', which can have unwanted side effects due to Windows no longer downloading most Windows updates or Windows Store app updates. Microsoft should just relent and give the Home edition the same level of control over updates as is available to Pro users. 3. Allow users to opt out of feature updates altogether Not everyone appreciates Microsoft's twice-yearly feature updates messing with their desktop, and, for some, the smattering of new features is, at best, unnecessary. Microsoft should give all users the option to completely opt out of feature updates—the most recent being the Windows 10 Fall Creators Update—and instead only receive essential patches and fixes. As it is, users of Pro and Enterprise can defer feature updates for more than a year, so why not go one step further and let everyone opt out altogether? Does it really make sense for people who don't have the slightest interest in virtual reality to suddenly find their computer has a Mixed Reality Portal? There's even already precedent for the change, with Microsoft recently revealing that PCs with unsupported Intel Atom CPUs would not receive any feature updates past last summer's Anniversary Update. 4. Stop trying to force Bing and the Edge browser on users While it makes sense for Microsoft to build an ecosystem of linked services, from both a practical and commercial point of view, it would be nice if Microsoft let users choose their search engine when using Windows' built-in search feature. Microsoft says that locking the Search function to Bing and its Edge browser is necessary to ensure the best possible experience for Windows 10 users.
But given the relatively limited market share of Bing and Edge, it's clear that many users prefer competing products and services, so again it would be good if Microsoft would allow users to use their search engine or browser of choice. 5. Stop pushing the Microsoft Store so hard until it's better stocked Microsoft is determined to get more people to use the Microsoft Store, whether by locking Windows 10 S to using Store apps, or by releasing Store exclusives. However, despite launching in 2012, the Store's selection of apps is still fairly lacklustre, especially compared to the unfettered selection of software available for the Windows desktop. Microsoft faces a classic chicken-and-egg problem: without the userbase, it won't get the apps, but without the apps, it can't attract the userbase. Trying to forcibly create an audience by locking an OS to the Store isn't the answer, however; all it does is highlight just how sparse the offerings in the Microsoft Store are. Be your company's Microsoft insider with the help of these Windows and Office tutorials and our experts' analyses of Microsoft's enterprise products. Subscribe to our Microsoft Weekly newsletter.
The iPhone X is built with gestures in mind, taking MultiTouch to the next level as it's now the main way to interact with the iPhone. Doing things as simple as double-tapping the Home button to show the App Switcher, using Reachability for items at the top of the screen, and Force Quitting apps has changed. These are the top five gestures that you need to know to take full advantage of the iPhone X. SEE: Mobile device computing policy (Tech Pro Research) 1: How to access the App Switcher on the iPhone X On previous versions of iOS hardware, accessing the App Switcher to swap to another app or force quit an app was as simple as double-tapping the home button; however, with iPhone X, the home button is no more. To access the App Switcher—whether you're in an app or on the Home Screen—you'll use the Home gesture (swipe up from the bottom), except you'll stop halfway up the screen and pause. The view will change to the App Switcher you know and love (Figure A). Figure A 2. How to force quit apps on the iPhone X Inside of the App Switcher, you may be wondering how to force quit an app, because in this new switcher, swiping up does not quit the app. To force quit an app, launch the App Switcher, then tap and hold on an app. This will enter editing mode where you can either choose the "-" button that appears in the corner of each open app, or swipe up as you would on a non-iPhone X iOS device. As we've mentioned in a previous article, you only need to Force Quit unresponsive apps. There is no need to force quit apps on a regular basis. 3. How to quickly swipe to the previously used app on the iPhone X The Home Indicator at the bottom of the screen gives you many capabilities at a single tap or swipe. Swiping from left to right on the Home Indicator will launch the previously used app from the App Switcher without the need to open the App Switcher. This feature is very useful on the iPhone and greatly improves multitasking capabilities because it lets you jump between apps quickly and efficiently. SEE: Cybersecurity in an IoT and mobile world (ZDNet special report) | Download the report as a PDF (TechRepublic) 4. How to enable Reachability on the iPhone X Reachability is an accessibility feature that can be enabled on previous iPhone models by double-tapping on the Home Button to slide the top of the screen down by half the screen to make top items more reachable while holding the device with one hand. With the Home Button gone, this feature has changed slightly and is not enabled by default. To enable Reachability in iOS 11.1.1, follow these steps. Open the Settings app. Navigate to General | Accessibility. Enable the option for Reachability (Figure B). Figure B To use Reachability once it's enabled, swipe down on the Home Indicator at the bottom of the screen. You'll see the current app slide half way down the screen, giving you easier access to reach the items at the top of the screen. 5. How to manage the iPhone X's Home Screen On an iPhone X, you may be wondering how to exit out of Home Screen arranging mode (aka jiggly mode). First, enter this mode by tapping and holding on an icon or folder on the Home Screen. The icons will begin wiggling, and you can rearrange them. To exit this mode, swipe up from the bottom like you would to exit to the Home Screen from an app. In iOS 11.1.1, a Done button appears in the top right corner of the screen in the status bar area—Tapping this button will also exit editing mode. For more iPhone tips and tricks, subscribe to our Apple Weekly newsletter. 
Image: HPE Hewlett Packard Enterprise (HPE) and NASA will partner to send a supercomputer to space, the companies announced in a blog post on Monday. The "Spaceborne Computer" will be sent up to the International Space Station (ISS), first by being launched on the SpaceX CRS-12 rocket, and then sent via the SpaceX Dragon Spacecraft. It will be a year-long experiment, with aims to eventually land on a mission to Mars—a trip of the same length. Why? Advanced computing, currently done on land, could help astronauts survive in gruelling conditions by allowing for processing of information in real-time in space. The Spaceborne Computer comes equipped with the HPE Apollo 40 class systems, the blog post stated, which includes a high-speed HPC interconnect running an open-source Linux operating system. Importantly, this system could eliminate communication latencies, which can take up to 40 minutes, and can "make any on-the-ground exploration challenging and potentially dangerous if astronauts are met with any mission critical scenarios that they're not able to solve themselves," Alain Andreoli, SVP and GM of HPE's data center infrastructure group, wrote in the blog post. A computer this advanced has never run in space before, since most computing systems are not developed to survive in brutal conditions that include factors such as radiation, solar flares, micrometeoroids, unstable electrical power and irregular cooling. However, the software on this computer was developed to withstand these types of conditions, and its water-cooled enclosure for the hardware was created to help keep the system safe. The project, Andreoli wrote, has implications beyond what it can do for a voyage to Mars. "The Spaceborne Computer experiment will not only show us what needs to be done to advance computing in space, it will also spark discoveries for how to improve high performance computing (HPC) on Earth and potentially have a ripple effect in other areas of technology innovation," Andreoli wrote. SEE: How Mark Shuttleworth became the first African in space and launched a software revolution (PDF download) The 3 big takeaways for TechRepublic readers: On Monday, a blog post by HPE announced a partnership with NASA that will send a "Spaceborne Computer"—a supercomputer—on a year-long experiment to the ISS. The project is intended to eventually end up on a voyage to Mars—which also takes a year—with the intention of helping astronauts perform high-level computation in space, which could eliminate the current lag-time in communication between space and Earth. The project is intended to have broader implications for the kind of advanced computing that can be done in space. Find out The Next Big Thing from TechRepublic. Subscribe Also see
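The latency figure is straightforward physics: radio signals travel at the speed of light, and the Earth-Mars distance swings between roughly 55 million and 400 million kilometers. The quick calculation below (distances approximate) shows why a round trip can approach the 40-minute mark quoted above.

```python
# Rough one-way and round-trip light delay between Earth and Mars.
C_KM_PER_S = 299_792  # speed of light

distances_km = {
    "closest approach": 54_600_000,
    "average": 225_000_000,
    "farthest (conjunction)": 401_000_000,
}

for label, km in distances_km.items():
    one_way_min = km / C_KM_PER_S / 60
    print(f"{label:>23}: {one_way_min:4.1f} min one-way, {2 * one_way_min:4.1f} min round trip")
```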
Apple's FileVault encryption program was initially introduced with OS X 10.3 (Panther), and it allowed for the encryption of a user's home folder only. Beginning with OS X 10.7 (Lion), Apple redesigned the encryption scheme and released it as FileVault 2—the program offers whole-disk encryption alongside newer, stronger encryption standards. FileVault 2 has been available in each version of OS X/macOS since 10.7; the legacy FileVault is still available in earlier versions of OS X. This comprehensive guide about Apple's FileVault 2 covers features, system requirements, and more. We will update this article if there's new information about FileVault 2. SEE: Encryption Policy (Tech Pro Research) Executive summary What is FileVault 2, and how does it encrypt data? FileVault 2 is a whole-disk encryption program that encrypts data on a Mac to prevent unauthorized access from anyone who does not have the decryption key or the user's account credentials. Why does FileVault 2 matter? Encryption of data at rest or stored on a disk is often the last resort for ensuring that data is protected against unauthorized access. The recent high-profile security breaches make it even more important to know about encryption programs such as FileVault 2. Is FileVault 2 available to all macOS users? All macOS users can enable FileVault 2 to protect their data. Users running more recent versions of OS X (10.7 and later) can also enable whole-disk encryption, while those using older versions of OS X will only be able to utilize legacy FileVault, which encrypts just their home folder. What are the pros and cons to using FileVault 2? Among the pros: it supports legacy hardware, and deployment may be locally or centrally managed by users or the IT department. One con is that enabling FileVault 2 can have a negative impact on I/O performance of approximately 20-30%, even on modern CPUs. More pros and cons are detailed in this article. What are alternatives to FileVault 2? The main competitors are VeraCrypt, BitLocker, GnuPG, LibreCrypt, and EncFS. How can I get FileVault 2? FileVault 2 is baked into all versions of macOS and supported versions of OS X. The encryption program is turned off by default, though it's easy to enable. Additional resources What is FileVault 2, and how does it encrypt the startup disk on Macs? FileVault 2 is an encryption program created by Apple that provides full-disk encryption of the startup disk on a Mac computer.
By utilizing the latest encryption algorithms and leveraging the power and efficiency of modern CPUs, the entire contents of the startup disk are encrypted, preventing all unauthorized access to the data stored on the disk; the only people that can access the data have the account credentials that enabled FileVault on the disk, or possess the master recovery key. By enabling FileVault 2's whole-disk encryption, data is secured from prying eyes and all attempts to access this data (physically or over the network) will be met with prompts to authenticate or error messages stating the data cannot be accessed—even when attempting to access data backups, which FileVault 2 encrypts as well. Additional resources Why does FileVault 2 matter? FileVault 2, in and of itself, cannot prevent users from attacking your system or otherwise exfiltrating the encrypted data. The encryption program is not a substitute for proper physical, logical, and data security standards, but rather a part of the overall puzzle that makes up your device's security. Data encryption is often seen as the last resort because, if all other security features in place are compromised, encrypted data will still be unreadable by everyone except people that have the decryption key, or those that can brute-force their way past the algorithm, which is easier said than done. SEE: All of TechRepublic's cheat sheets and smart person's guides If the encryption standard in place is properly implemented and uses a strong, modern algorithm, and the recovery keys are not accessible or consist of a long, random key space, the attackers will have their work cut out for them. If the attackers gain access to the data sitting on the disk, they may be able to copy it, take it off your network, and even attack it directly, but they'll still be at an impasse if they cannot crack the encryption. And if the attackers cannot crack the encryption, your data will remain unreadable, and subsequently, of little to no real use or value. Additional resources Is FileVault 2 available to all macOS users? Users running OS X 10.7 (Lion) or later, all the way through the current version of macOS 10.13 (High Sierra), may enable and fully utilize the full-disk encryption capabilities of FileVault 2 on their desktop or laptop Mac computers. By default, the feature is disabled; however, it only takes accessing the System Preferences and clicking the Turn On FileVault 2 button to enable the feature and encrypt your whole disk. Encryption may be enabled by the user or managed by the administrators for company-owned devices. Administrators have set policies via Profile Manager and/or scripts that will enable FileVault 2 during deployment and implement institutional recovery keys that the company manages in order to recover encrypted data per device, if needed. SEE: Essential reading for IT leaders: 10 books on cybersecurity (free PDF) (TechRepublic) Once FileVault 2 is enabled, only the user with administrative privileges that enabled FileVault 2 with their account may decrypt the drive's contents. Additionally, a master recovery key is created during the initial process; users with either of those keys may be the only ones to decrypt the volume and read the contents of the drive. Additional resources What are the pros and cons to using FileVault 2? The pros to using FileVault 2 It's a native Apple solution that is designed by Apple for Apple computers. FileVault 2 supports legacy hardware, even for devices that are no longer officially supported by Apple. 
Deployment of FileVault 2 may be locally or centrally managed by users or the IT department. Whole-disk encryption works to safeguard all data stored on disk now and in the future. Backup of encrypted data works seamlessly with Time Machine to create automated backup sets. Disks encrypted with FileVault 2 must first be unlocked by user accounts that are "unlock enabled"; these are typically accounts with administrative privileges, preventing non-admin accounts from accessing the disk's contents, regardless of the ACL permissions configured. FileVault 2 uses a strong block-cipher mode, XTS, based on the AES algorithm, using 128-bit blocks and a 256-bit key. The cons to using FileVault 2 Legacy FileVault (or FileVault 1) does not encrypt the whole disk—only the contents of a user's home folder. This affects legacy hardware that does not support the features in FileVault 2. Backing up encrypted data with Time Machine can only be done when a user is logged off of the session. For on-the-fly backups, the destination path must be a Time Machine Server, which requires macOS Server to perform online backups. The encryption passphrase used to encrypt the disk is the same as the end-user's password that enabled FileVault 2. If that password is compromised, the disk may be decrypted and the data compromised. Enabling FileVault 2 can have a negative impact on I/O performance of approximately 20-30% on modern CPUs, and it noticeably worsens performance on older processor hardware. If the passphrase or recovery key must be changed, the entire volume will need to be decrypted and have the encryption process run again with the new key. Any device with FileVault 2 enabled must be unlocked by an admin-credentialed account prior to being accessed or used by a non-admin account. If the device is not unlocked, non-admin accounts will not be able to use the computer until it is first successfully unlocked. Individual files, folders, or any other kind of data cannot be encrypted on the fly. Only data that resides on the local disk or on other FileVault 2-encrypted volumes is encrypted, and only in its entirety. Additional resources What are some of the alternatives to FileVault 2? VeraCrypt is free, open source disk encryption software that provides cross-platform support for Windows, Linux, and macOS. It was derived from TrueCrypt, a full-disk encryption application whose creators discontinued support after a security audit revealed several vulnerabilities in the software. Building on the TrueCrypt code base, VeraCrypt corrected the vulnerabilities while adding changes to strengthen the way in which files are stored. VeraCrypt creates a virtual encrypted disk within a file and mounts it as a disk that can be read by the OS. It can encrypt the entire disk, a partition, or storage devices such as USB flash drives, and provides real-time, on-the-fly encryption, which can be hardware-accelerated for better performance. It also supports TrueCrypt's hidden volume and hidden operating system features. BitLocker is Microsoft's full-disk encryption feature, included in supported versions of Windows Vista and later. Using default settings, BitLocker uses AES encryption with XTS mode in conjunction with 128-bit or 256-bit keys for maximum protection, especially when leveraged with a TPM module to ensure the integrity of the trusted boot path, which prevents many physical attacks and boot sector malware from compromising your data.
When used on a computer in an Active Directory environment, BitLocker supports key escrow, which allows Active Directory to store a copy of the recovery key. In the event that data needs to be recovered, administrators may retrieve the key. GnuPG is based on the OpenPGP standard, which grew out of the PGP encryption program created by Phil Zimmermann and later acquired by Symantec. Unlike Symantec's offering, GnuPG is completely free software and part of the GNU Project. The software is command-line based and offers hybrid encryption, using symmetric-key cryptography for performance and public-key cryptography for the ease of exchanging secure keys. While the lack of a GUI may not be for everyone, the program's flexibility allows for signed communications, file encryption, and, with some configuration, disk encryption to protect data. Dubbed the universal crypto engine, GnuPG can run directly from the CLI, from shell scripts, or from other programs, often serving as a backend for other applications. LibreCrypt is a transparent full-disk encryption program that fully supports Windows and contains partial support for Linux distributions. It is open source and has an online community of users who are committed to resolving issues and introducing new features. Often cited as the easiest-to-use encryption program for Windows, it can create encrypted containers as well, mounting them as removable disks in Windows Explorer for easy access. In addition to the multitude of supported encryption and hashing standards and modes, it also supports smart cards and security tokens to authenticate users, and it can encrypt and decrypt data at the file, partition, or whole-disk level. EncFS is an encrypted filesystem that runs in user space, using the FUSE library. The FUSE library acts as an interface for user-space filesystems, allowing users to mount and use filesystems not natively supported by the host OS. FUSE/EncFS are open source releases and support Linux, BSD, Windows, Android devices, and macOS. It is also available in a number of languages, as it has been translated by community members. With active community support on GitHub and regular updates, EncFS offers users the ability to create a filesystem that can be mounted and used to store secure data files, and then unmounted to protect against offline attacks and unauthorized user access. Additional resources How can I get FileVault 2? FileVault 2 is in all versions of OS X from 10.7 through macOS 10.13—it just needs to be enabled, as the service is turned off by default to allow end users to perform the initial setup process, which allows them to create a master recovery key. This key acts as a backup in the event that they become locked out of their account and must recover data via an alternate path. Users of OS X prior to 10.7 may use Legacy FileVault, or FileVault 1 (the initial offering of the encryption application), which only encrypts a user's home folder and not the entire disk. This must be enabled per user on that device and will still leave any data not stored within an encrypted home folder open to unauthorized access. The good news is that as long as your Apple computer supports a recent version of OS X or the modern releases of macOS, you can upgrade your Mac's operating system at any time to a newer version to enjoy the benefits of FileVault 2's enhanced security. Additional resources For the latest IT security news, tips, and downloads, subscribe to our Cybersecurity Insider newsletter.
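For admins who want to script the checks described in this guide, macOS also ships a command-line tool, fdesetup, for querying and managing FileVault. The sketch below is a minimal status check; the exact output strings are assumed to match recent macOS releases, so treat it as a starting point rather than a hardened compliance script.

```python
import subprocess

def filevault_enabled() -> bool:
    """Ask Apple's fdesetup tool whether FileVault is on (no sudo needed for status)."""
    result = subprocess.run(["fdesetup", "status"], capture_output=True, text=True)
    return "FileVault is On" in result.stdout

if __name__ == "__main__":
    if filevault_enabled():
        print("Startup disk is encrypted with FileVault 2.")
    else:
        print("FileVault is off. Enable it in System Preferences > Security & Privacy > FileVault,")
        print("or, in a managed deployment, via 'sudo fdesetup enable'.")
```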
Google Fiber showed new life in 2017, after a near death experience in late 2016. The fiber internet pioneer launched in three new cities—Huntsville, AL, Louisville, KY, and San Antonio, TX—this year. It also began to heavily rely on shallow trenching, a new method of laying cables, to expedite the construction process. SEE: Photos: How Google Fiber is using 'shallow trenching' to outbuild its gigabit rivals "We're very pleased with the response from residents in these markets—along with our other existing Google Fiber cities, where we worked hard throughout the year to bring Fiber service to even more people in many more neighborhoods," a Google Fiber spokesperson told TechRepublic. The comeback happened after a construction halt and the CEO stepping down in October 2016, which left some wondering if Fiber was on its last breath. SEE: Internet and Email usage policy (Tech Pro Research) But 2017 wasn't entirely a year of redemption. In February, hundreds of Fiber employees were moved to new jobs at Google. And Gregory McCray left the role of CEO in July after only holding the position for five months. And internet experts still have their doubts. Chris Antlitz, a senior analyst at Technology Business Research, labelled Fiber's year as "not very good." Jim Hayes, president of the Fiber Optic Association, called Google Fiber a "very distant player" in the fiber market. However, Antlitz added that, for Alphabet—the parent company of both Google and Google Fiber—that means they're just not growing as fast as they wanted to. Google Fiber has still had an impact this year, he said. Fiber set a new bar for broadband by showing incumbent internet service providers (ISPs) that it is economically feasible to bring 1 gigabit internet to consumers, Antlitz said. Since Google Fiber led a connectivity renaissance in 2011 when it launched in its first city, Kansas City, KS, top telecom providers have been in an arms race to upgrade their broadband pipes to accommodate 1 gigabit, Antlitz said. Google Fiber's presence in the market has caused competition that has forced other fiber providers like Verizon and AT&T Fiber to offer cheaper, faster service. Adding a second provider to a market can reduce prices by around one-third, according to a study by the Fiber to the Home Council. SEE: Google Fiber 2.0 targets the city where it will stage its comeback, as AT&T Fiber prepares to go nuclear AT&T has been particularly competitive, analysts say. They've been expanding in current and prospective Google Fiber cities, including adding new neighborhoods in San Antonio months before Google Fiber arrived. In Louisville, AT&T sued the Louisville Metro Government over its "One Touch Make Ready" ordinance, which allows Google to use existing poles to install its technology without permission from the telecom company that owned the poles. The lawsuit was dismissed in August, and AT&T said it wouldn't appeal the dismissal in October. A TechRepublic investigation found that AT&T has talked a big game about its buildout in Louisville, but has dragged its feet in rolling out gigabit internet to customers and has signed up very few households. It's this kind of activity that has gotten AT&T's gigabit strategy labeled "fiber-to-the-press-release." It's unclear what Google Fiber's 2018 will look like. The company's map of Fiber cities doesn't yet list an upcoming city where Fiber will be heading next. 
Eight potential cities—Portland, OR, San Jose, CA, Los Angeles, CA, Dallas, TX, Oklahoma City, OK, Tampa, FL, Jacksonville, FL, and Phoenix, AZ—are listed as places the company is exploring. SEE: Louisville and the Future of the Smart City (a ZDNet/TechRepublic special report) William Hahn, an analyst at Gartner, said going to even one-third of those cities next year would be impressive for Google Fiber. However, he said he doesn't foresee a shift in the market in the next two years. The next big gamechanger? The rollout of 5G, which will give providers more wireless options to play with in cities and hard-to-reach rural areas. In five US cities in 2018, Verizon plans to roll out 5G fixed wireless, which will compete directly with fiber on speed and low latency. Antlitz said it's probable that Fiber will collaborate with incumbent ISPs to target unserved and underserved communities, including those in emerging markets and harder-to-reach rural spots. "I think they don't want to be an ISP," Antlitz said. "They're trying to prove a point." The point? That faster, 1 gigabit internet can be affordable—and that the existing ISPs just needed a push. "They got what they wanted," Antlitz added. Stay informed: click here to subscribe to the TechRepublic Next Big Thing newsletter.
Spend more than an hour in a meeting with any major software company and you're bound to hear the buzzword "hyperconverged infrastructure," but what is it, and why should you care? Industry analyst Zeus Kerravala explained it for us in a question-and-answer session. We played the role of skeptic. SEE: Virtualization policy (Tech Pro Research) TechRepublic: We think we understand what hyperconverged infrastructure means, but how would you explain it? Zeus Kerravala: "It's kind of a weird term. There was already a converged infrastructure market [lacking the software aspect] when this technology came around. Hyperconverged platforms are turnkey products that include all the hardware and software one needs to run a contained little data center in a box. ... When you look at running data center infrastructure there's a lot of different choices for buyers. If you use Cisco networking, EMC storage, and Dell computing, which is a pretty standard thing, there's over 800 configurations. [In HCI] the vendor's done all the heavy lifting. They're not plug-and-play... it's data center technology, nothing's ever going to be plug-and-play. But customers have told me the deployment time for these is days vs. months if you're trying to cobble it all together yourself." SEE: The cloud v. data center decision (ZDNet special report) | Download the report as a PDF (TechRepublic) TechRepublic: Do you think most corporate sysadmins and CIOs understand this? Zeus Kerravala: "I'm not sure the CIO does. I think technology has been somewhat niche. It's been used primarily for virtual desktop deployments. Those are workloads that tend to be demanding... unified communications are a likely next thing. I don't really understand where the 'hyper' came from, to be honest with you." TechRepublic: Most good ideas in information technology are cyclical. How much of this is truly novel and how much is just a new name? Zeus Kerravala: "We used to have converged platforms a long time ago, and we called them mainframes. The reason the hyperconverged market exists is to simplify the deployment of all the stuff we need to run data centers." TechRepublic: Why is this happening now? Zeus Kerravala: "I talk to CIOs. More and more, CIOs are less concerned about the technical aspects of running stuff. They want stuff to work so they can run the business. There's a theme of digital transformation that's cutting across all businesses. If you talk to a CEO about running a business, it's about speed today. It's Darwinism." SEE: Digital transformation: A CXO's guide (ZDNet special report) | Download the report as a PDF (TechRepublic) TechRepublic: What are the risks of changing from traditional to hyperconverged infrastructure? Zeus Kerravala: "I haven't really talked to anyone who hasn't had a good experience [except] using the technology for the wrong workloads. If you're going to run hyperconverged infrastructure, the development has to be done on a product that's at least similar from a hardware perspective." TechRepublic: What about hardware upgrades? Zeus Kerravala: "Applications that have the most demanding hardware requirements, I'd probably keep those on a platform that I have a little bit of control over, and I can upgrade the processors when I need to. If you wouldn't run it on a virtual machine, then certainly don't run it on this." TechRepublic: Which companies are offering hyperconverged infrastructure the right way? Zeus Kerravala: "The market leader right now in terms of brand and share is still Nutanix. 
They've done a lot of work in software. The one to watch is Dell/EMC but for specific use cases [such as with VMware's vSphere]. If your hypervisor is Microsoft or Citrix, then I might look at a different platform... 8kpc is a startup. They've done a lot of work on the hardware optimization phase." TechRepublic: Which companies aren't doing so well at it? Zeus Kerravala: "I think HPE is a bit of a confused company right now... Lenovo is another one that I've expected more of by now." TechRepublic: What is your advice for customers considering a hyperconverged infrastructure product? Zeus Kerravala: "There's a lot of products on the market and they all kind of pitch the same message. But the performance from box to box, from vendor to vendor, is going to be quite different depending on what you're running on it. Do your own testing. How does it work in a hybrid cloud situation? I'd also want to know from a roadmap perspective about flash storage, 100Gb Ethernet, and then NVMe." TechRepublic: What else should people know? Zeus Kerravala: "Try to have a good understanding of what it means to the operational team. Things may be easy to deploy initially but take a look at the ongoing management. That's really going to determine whether you get value out of these products or not." For more networking, storage, and enterprise hardware news, subscribe to our Data Center Trends newsletter.
Building a slide deck, pitch, or presentation? Here are the big takeaways: Business professionals could gain from wearing a wearable device, but some have more fitness features than business features. Professionals should look for wearables with different ways to stay connected, basic activity tracking, and a fashion-forward look. Wearables, like smartwatches and fitness trackers, are popular with business professionals, and for good reason. The devices can collect data and provide insights, allowing wearers to track their fitness and productivity to reach their goals faster. But some devices may not work the best for business professionals. They may not have enough ways to stay connected in terms of communications, or they may be too focused on physical goals like meditation. And some may stand out too much for professionals in formal business attire to feel comfortable wearing them. SEE: Wearable Device Policy (Tech Pro Research) How to choose a wearable for business Connectivity is the first thing to consider when selecting a wearable for business purposes. Some options can connect to a smartphone, while others work outside of a cellular network. Some professionals need constant access to business communication, and selecting a wearable that works in tandem with a smartphone can provide that. Think about what you want to accomplish with your wearable. Do you want to just be able to see notifications, or do you need to be able to answer texts and emails as well? What about activity tracking? It can help you understand how you spend your hours to find ways to become more productive, or you can use it for fitness purposes as well. Apps and integrations can be helpful for business professionals, so check out what is available. Some, especially ones connected to a smartphone, have multiple options, while others have fewer choices. Integrations can streamline things between your wearable and other devices, potentially making you more efficient. Apps can offer new ways to boost productivity. Mindfulness features are also helpful, especially in high-stress jobs or industries. A sleep tracker can help understand if you're sleeping long or well enough. Finally, looks aren't everything, but some wearables can stand out when worn with business attire. More wearables are adopting the look of traditional watches, with leather bands and sleek faces. A wearable won't do much if you don't wear it because of its look. How to choose a wearable for fitness First, you should consider if you want one device to carry from work to the gym, or if you want separate options. Some popular devices, like the Apple Watch, can work for both environments due to the amount of features and connectivity options. Much like with business wearables, you need to consider what exactly the fitness tracker needs to do. Most will offer the same baseline metrics, but others offer more analytics. How much insight do you want into your workouts? Some only need simple step tracking, but someone training for a marathon may need more detail. In what physical environment are you going to use the device? Whatever the answer, the tracker should be ready. For example, if you're a swimmer, you obviously need a water-resistant device. Runners may want a device with a built-in GPS so they can track their runs. You should also consider the tracker's connected app, if it has one. What analysis and insights can you get on the app? Does it have features to track food and water intake? 
Like with business wearables, integrations may also be important, so review the offerings. Stay informed, click here to subscribe to the TechRepublic Tech News You Can Use newsletter.
You might want to mass delete email from Gmail for many reasons: To remove non-work-related messages from an account, to achieve "inbox zero" as part of a personal productivity effort, or—more mundanely—to reduce the storage space used by attachments. Some people pursue #NoEmail—and start to treat email as an ephemeral communication channel instead of a permanent archive. Before you start to mass delete items from Gmail, I recommend that you export your current email data. To do this, use Google Takeout at https://takeout.google.com. Choose the "Select None" button, then scroll down the page to Mail. Move the slider to the right of Mail to "on." (You may export just some of your email: Select the down arrow to the left of the slider, then choose one—or more—Gmail labels to select items tagged with those labels to export.) Select Next at the bottom of the page, then choose the format, file sizes, and storage action for your export. Wait to start your deletions until you've either downloaded or verified that your exported email has been stored. (Note: If you use G Suite, an administrator has the ability to disable access to Takeout. If that's the case, talk with your administrator about backup before you begin.) After you back up, cycle through the following four steps to move sets of email to the Gmail trash.
1. Search
I've found typing search terms into the Gmail search box in the desktop Chrome browser to be the most efficient way to find and select sets of emails. And while Google gives you a long list of search operators, I suggest you start with the following:
Email address. Enter to: or from: followed by an email address to find all of the email sent to or received from an address.
Subject. Enter subject: followed by a word (or a phrase in quotes) to find all email that contains the word or phrase specified.
Date. While there are several time-search options, try before: or older_than: first. The first locates items prior to a specified date, while the latter locates items older than a certain number of days, months, or years (e.g., 3d, 1m, or 7y for 3 days, 1 month, or 7 years, respectively) from the current date.
Size. Locates email larger than a specific size. For example, larger:20M finds items larger than 20MB.
Often a simple search may be all you need to locate a set of email you no longer need. For example, you might not need to keep receipts from some vendors (email address), accepted calendar invitations (subject), email older than 3 years (date), or large files (size) stored elsewhere.
2. Review and refine
Review the search results to see if you wish to keep email found with a simple search. If not, move on to step no. 3. If you see emails that you wish to keep among the results, you'll need to refine your search. You can combine multiple search terms. For example, search for both an email address and a date: from:[email protected] older_than:1y This would find items older than one year from the current day.
Or, add a subject as well, to narrow the results further: from:[email protected] older_than:1y subject:"Weekly meeting" You may use the - character to exclude a search term (or terms). For example: to:[email protected] older_than:1y -subject:"Quarterly review" This would find items older than a year sent to a specific email address, but would exclude any emails with the subject "Quarterly review." If more than one screen of results is indicated, select the arrow in the upper right area to review additional screens of email search results. Refine and review the results until you're confident that all the email found by your search is email you wish to delete.
3. Select / Select All
Select the box at the top of the column above your email search results to select all of the email displayed. If your search returns more email than is displayed on the current screen, you'll see a message above the list of email that gives you the option to "Select all conversations that match this search." Click the words to select all conversations that match your search terms.
4. Move to Trash
Select the trashcan icon to delete the selected email.
Repeat for various terms
Repeat your search to find, select, and delete as many sets of email as you wish. When I help people get control of their email we often search for things such as:
Old promotional emails, newsletters, and updates
Email no longer needed from specific clients, vendors, or colleagues
System status notices (e.g., update notifications and system down/up notifications)
Outdated social media or account sign-in notifications
Email related to prior jobs (including paid and volunteer roles)
Tip: Use a label to exclude a set from a search
Often, I find it helpful to label a set of email so that I can always exclude that set of email when I work through the email deletion process. For example, you might want to keep all email from a specific person (or several people). To do this, first create a Gmail label, such as "Never delete." Then, search for a colleague's email address. Select all email from that person, then select the label icon, choose the label you created (e.g., "Never delete"), then select "Apply" at the bottom of the column. Repeat this process for as many criteria as you wish. Then, when you do searches, always exclude labels that match the selected set. For example: older_than:1y -label:"Never delete" This would return all emails older than a year, while excluding all emails labeled "Never delete."
Optional: Delete Trash
At this point, you're done. Gmail will remove items left in the trash after 30 days. If you really want things deleted now, you can always navigate to the trash, select all items (and choose to select all conversations in the trash), then choose "Delete forever."
G Suite controls
A G Suite administrator has at least two significant options available to manage mail, as well. First, an administrator can set a Gmail auto-delete policy (from admin.google.com, sign in, then Apps > G Suite > Gmail > Advanced settings > Compliance: Email and chat auto-deletion) for messages to either be moved to trash or deleted after a specified number of days. The administrator also may specify that emails with a specific label (or labels) will not be auto-deleted. Second, a G Suite administrator can configure Google Vault, which gives the organization a sophisticated set of controls to preserve, search, and export email communications for legal and/or compliance purposes.
(Vault is included with Business and Enterprise edition licenses.) Your thoughts? Do you maintain a pristine, close-to-zero Gmail inbox? Or do you archive everything forever? How often do you delete sets of messages from Gmail? Let me know in the comments — or on Twitter (@awolber). Subscribe now to our Google Weekly newsletter to stay informed of useful Google news and tips!
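For readers who would rather script the search-and-trash loop described above, here is a rough sketch using the official Gmail API Python client. The query string, the credentials object (creds), and the 500-message cap are assumptions for illustration; you would need to enable the Gmail API in a Google Cloud project and complete OAuth with the gmail.modify scope separately.

# Illustrative sketch: move every message matching a Gmail search query to Trash.
# Assumes "creds" is an authorized google.oauth2 credentials object with the
# https://www.googleapis.com/auth/gmail.modify scope already granted.
from googleapiclient.discovery import build

def trash_matching(creds, query: str, limit: int = 500) -> int:
    """Trash messages matching `query`; return how many were moved."""
    service = build("gmail", "v1", credentials=creds)
    moved = 0
    page_token = None
    while moved < limit:
        resp = service.users().messages().list(
            userId="me", q=query, pageToken=page_token
        ).execute()
        for msg in resp.get("messages", []):
            service.users().messages().trash(userId="me", id=msg["id"]).execute()
            moved += 1
            if moved >= limit:
                break
        page_token = resp.get("nextPageToken")
        if not page_token:
            break
    return moved

# Example, reusing the same search shown earlier in the article:
# print(trash_matching(creds, 'older_than:1y -label:"Never delete"'))

As with the manual steps, anything trashed this way still sits in the Gmail trash for 30 days before permanent deletion, so a scripting mistake is recoverable.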
China's population, which in 2016 included 793 million urban residents and 590 million rural residents, is spread out over a land mass of 3.7 million square miles. By 2010, 93% of the rural population had healthcare coverage, but the challenges of providing medicine and timely healthcare to rural regions persist. This is where analytics can make a difference. "We wanted to take analytics, artificial intelligence and deep learning technologies and use them to better understand different medical conditions, how to diagnose them, and how to treat them," said Kuan Chen, founder and CEO of Infervision, a Chinese artificial intelligence and deep learning company that specializes in medical image diagnosis. Analytics, artificial intelligence, and deep learning are put into play by analyzing medical images and reports on different pathological conditions, and then coming up with different models and sources of treatment and medical interventions based on common patterns that are assembled from studies of thousands of patients in China's urban hospitals. "These models use deep learning to 'learn' from the data and continuously improve their diagnostic capabilities," said Chen. The first disease that Chen targeted was lung cancer, with the software being able to locate hard-to-detect or hidden nodules in the lungs that could prove to be cancerous. Now the task at hand is providing a similar diagnostic and medical intervention tool for strokes, which can be especially useful in rural areas where qualified medical practitioners are scarce. SEE: IT leader's guide to deep learning (Tech Pro Research) "By studying the stroke condition of a patient, the analytics can determine what is the optimum time table for treatment, and how aggressively the stroke should be treated," said Chen. In short, the AI and analytics become a second pair of eyes for radiologists against which they can cross-check their own diagnoses. How important is this? "In many rural areas in China, there are no trained radiologists who can help stroke victims," said Chen. "And in other areas of the world, like the US, radiologists make an average of $375,000 a year, so they are very expensive." Chen says that the feedback he gets from hospitals is that younger radiologists and medical practitioners rely heavily on AI, while older practitioners prefer to use it as a second opinion that they cross-check against their own. "In a stroke, you want to respond to the condition as quickly as possible," said Chen. "It might take 30 to 35 seconds in a standard process to generate a report on the condition so treatment can be determined. With our tool, that time is cut to less than three seconds." The use of deep learning and expanded analytics also expands the spectrum of diagnosis, which can lead to better results. "In one non-stroke case that involved diagnosis and treatment of a bone fracture and a degenerated area of bone, the standard approach is to treat the affected area itself," said Chen. "With analytics and AI, a system can focus on different areas of the body that are far removed from where the problem is to see if these other areas could be affecting the condition. If it is a problem that is being generated far from the fracture itself, the analytics allow us to treat causes of the condition, and not just symptoms."
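To make the image-analysis step less abstract, here is a minimal, illustrative sketch of how a trained deep learning classifier might be applied to a single scan slice. This is not Infervision's pipeline; the model file (nodule_model.pt), the input image, and the two-class output are assumptions for the example, which uses the PyTorch and Pillow libraries.

# Hypothetical sketch: score one chest scan slice with a pre-trained classifier.
# Assumes a TorchScript model saved as "nodule_model.pt" that outputs two logits:
# [no nodule, suspected nodule]. Illustration only, not a medical device.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # treat the slice as single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_slice(image_path: str, model_path: str = "nodule_model.pt") -> float:
    """Return the model's probability that the slice contains a suspicious nodule."""
    model = torch.jit.load(model_path)
    model.eval()
    image = Image.open(image_path)
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 1, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return torch.softmax(logits, dim=1)[0, 1].item()

if __name__ == "__main__":
    print(f"Suspected-nodule probability: {score_slice('slice_042.png'):.2%}")

In practice, the output of a scorer like this would be surfaced to a radiologist as the "second pair of eyes" Chen describes, not acted on automatically.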
SEE: How to implement AI and machine learning (free PDF) (ZDNet/TechRepublic special report) Here are some best practices hospitals and clinics can adopt as AI and deep learning tools evolve:
Deploy the tool where help is needed most: If there is an acute shortage of medical practitioners in a specific region, analytics and AI can help in situations like stroke intervention and treatment, and the chances of success for patients will improve.
Use the tool for training: Radiologists and medical practitioners must develop knowledge and experience before they can become expert diagnosticians. An analytics and deep learning tool can assist in the training process because users can compare their own findings against what the system finds in numerous scenarios.
Learn to expect the unexpected: You might think you are going to treat one condition and end up treating another. The bone fracture that Chen mentioned, where the system actually found the causal problem in a different area of the body, is a prime example. This is why medical practitioners should keep their minds open.
Never forget that AI and deep learning tools are still developing: Just because a system uses AI, deep learning, and analytics doesn't mean that it is always right. Medical practitioners should use these systems as assistants and not as undisputed authorities, because there are some areas where a machine can't be a replacement for human thought and reasoning.
For more about AI, big data, and the latest innovations, subscribe to our Next Big Thing newsletter.
Building a slide deck, pitch, or presentation? Here are the big takeaways: A new stripped-down Windows 10 build called "Lean" was discovered in the latest Insider preview of Windows 10. It lacks many Windows 10 features and has a 2GB smaller installation size. Microsoft hasn't said what Lean's purpose is, but it appears to be for lower-end machines or those that need to be locked down from user tampering. Microsoft's latest Windows Insider skip ahead build contains a new version of Windows 10 called Windows 10 Lean, which cuts the installation size by 2GB. Discovered by Twitter user Lucan, Windows 10 Lean cuts out several Windows 10 features: desktop wallpaper is disabled by default, the Microsoft Management Console and registry editor are missing, drivers for CD and DVD drives can't be installed, Microsoft Edge doesn't allow downloads, and Microsoft Office is missing as well. At first glance it may seem that Windows 10 Lean is an alternative to Windows 10 S (which only allows app installation from the Windows Store), but Lucan quickly dismissed that by saying that those restrictions don't apply, as he was able to run applications normally locked to Windows 10 S users. What is Windows 10 Lean's purpose? The Twitter discussion growing around Lucan's discovery of Windows 10 Lean is devoid of one important thing: an explanation as to its purpose. Mary Jo Foley from TechRepublic sister site ZDNet speculated that it was a version of Windows 10 S for home or enterprise, but Lucan said he doesn't think that's the case. Windows 10 S, he said, is more like a set of restrictions on top of a standard Windows 10 install, which Lean definitely isn't. Another Twitter commenter said it may be ideal for educational use, as schools often have older computers that need a smaller install. Add to that the heavy restrictions on what a user can do in Windows 10 Lean (no downloading, no Regedit, etc.) and you have a relatively resilient OS that has lower-end hardware requirements. SEE: Securing Windows policy (Tech Pro Research) Windows 10 Lean could make a great OS for any systems that see a lot of user contact: Loaner machines, kiosks, sales floor demos, and other specific roles would be a great fit for Lean. Anyone who has ever managed computers that see a lot of public contact knows they have to be locked down, and Windows 10 Lean seems designed for that particular purpose. There are a lot of things users can't do in a base install of Lean, leaving it up to an administrator to pre-load an installation with certain software or settings that would be largely unalterable. Given the limits of Windows 10 Lean it's likely that it's designed to save space, be a quick install, and be customized as an image prior to being installed. Lean images could be configured to suit specific roles, and users would be largely unable to damage them. We won't know what Windows 10 Lean is really designed for until Microsoft says so, but if you want to check it out now you can do so in Redstone 5 Insider preview build 17650, available now to Windows Insider members. Get a roundup of the biggest Microsoft news of the week in your inbox: Subscribe to our Microsoft Weekly newsletter.
Image: Essential Before I made the purchase of an Essential Phone, like everyone else, I scoured through review after review. My goal was to sift through the chaff and find something that would key me into understanding what was at the core of the PH-1. It didn't take long for a common thread to bubble up to the surface. That thread? A less-than-stellar camera. The good news for me was that I don't consider a smartphone's primary function to be taking photos, so having the best camera available wasn't a top priority. With that out of the way, it seemed the PH-1 met all of my needs, and did so at a price point that was right on the money. And thus, I made the purchase, and set aside my OnePlus 3 to embark on a journey with the newest underdog. As many of you know, I'm a big fan of the underdog. I've been using Linux as my primary OS for decades, so I'm accustomed to watching a platform scrape and dig for attention and respect. However, after just two weeks of use, I'm convinced the PH-1 shouldn't be considered an underdog but a top dog, in a class by itself. That's not to say it's perfect, because it's not, but no device is (and anyone who believes otherwise is kidding themselves). Now that I've had plenty of time to experience Essential's first foray into the smartphone market, I feel like I have plenty to say about the device. And, with that said, let's dive into the good and the bad. SEE: Mobile device computing policy (Tech Pro Research)
The bad
I thought I'd flip the script and start with the bad. Why? Because there isn't much in the way of bad to be found. In fact, I have to trick myself into thinking "Maybe photos are more important than I originally thought!" before I can really come up with something negative of note to say about the PH-1. It has been well documented that the Essential Phone's camera is lackluster. The software is a bit slow, and the low-light photos are far from great. The selfie camera also suffers from the software issues that hinder the main camera. However, after four updates (that's right, four updates since I received the device, more on that in a bit), I've watched the camera app improve exponentially. It's still not nearly as good as the Pixel 2 camera app, for instance, but it's passable. For anyone who doesn't consider photos to be a priority, the camera app will suffice. The only other nit to pick is that the gorgeous case is the biggest fingerprint magnet I've ever seen. I'm constantly wiping the back down. Had this phone not been nearly as beautiful as it is, the fingerprints wouldn't concern me. But the PH-1 is one of the most elegant smartphones I have ever held in my hand, so my propensity is to keep it clean. Essential should be cleaning up in awards for hardware design—of that there is no doubt. Finally, there is no headphone jack. That's okay for two reasons: Bluetooth headphones have come a very long way, and Essential included the necessary dongle so users won't have to toss their standard headphones or other devices that might make use of that common interface. And that's it for the bad.
The good
There is almost too much to say here, so I'm going to boil it down to a few "essential" items. First and foremost: the design. As I said, it's gorgeous. But even the titanium sides and ceramic back take a back seat to the display. No, it's not the most cutting edge (Essential went with an LCD display, instead of the more popular, flagship-level OLED option), but the edge-to-edge design is absolutely beautiful.
Essential essentially proved that a bezel-less device is very much possible and their home screen launcher makes perfect use of the screen real estate (Figure A). Figure A The one downfall is that not every app found in the Google Play Store makes use of that full screen. To Essential's credit, so far I've only found one app that didn't—Discogs (Figure B). Figure B Beyond the hardware, there's the stock Android (shipping with Android 7.1.1). If you're looking for nothing but essential Android, the PH-1 delivers. Upon arrival the device included the bare minimum software. There was zero bloat. Couple that with the speedy Qualcomm Snapdragon 835 processor paired with 4GB of RAM and 128GB of internal storage, and that barebones Android runs as smoothly as any flagship device. Period. Apps install quickly, start instantly, and run smoothly. The PH-1 easily stands toe-to-toe with my wife's Samsung Galaxy S8. One very crucial aspect many users will appreciate is how quickly the PH-1 receives the Android Security Patch. Since initially turning on the device, my PH-1 has received four Android updates. Even though the device is running Android 7.1.1, it enjoys the most recent Security Patch (Figure C). Figure C The combination of beautiful and powerful hardware, and up-to-date barebones software make for an incredible experience. Who's the ideal PH-1 user? Let's make this easy: If you're tired of devices shipping with bloat—and who isn't—the PH-1 might be the ideal device for you. If you're constantly on-the-go, the titanium case is strong enough to withstand your brutal abuse. If you're a fan of the underdog, the PH-1 is the perfect smartphone for you. The ratio of price to performance will absolutely blow you away. No other smartphone, regardless of manufacturer, enjoys this level of form and function. Essential has every right to stand with the leaders in the industry. It's every bit as cool as the iPhone X and as flexible as any Android device—all without the price found with most flagship smartphones. If you like your devices to turn heads, the PH-1 is the perfect mix of brawn, brains, and beauty. The look of the PH-1 draws onlookers in, and the performance locks them in. The second you hold the PH-1 in your hand you'll know you've purchased a quality product. This is a flagship smartphone, there's no doubt. What more needs to be said? Bravo Essential, you've created something special. Automatically sign up for TechRepublic's Mobile Enterprise Newsletter for more news and tips. Subscribe Also see
Building a slide deck, pitch, or presentation? Here are the big takeaways: An Uber vehicle in autonomous driving mode hit and killed a woman in Tempe, AZ, in the first known pedestrian fatality involving the self-driving technology. Uber has temporarily stopped its self-driving operations in Tempe and all other cities where it has been testing its vehicles, including Phoenix, Pittsburgh, San Francisco, and Toronto. An Uber car in autonomous driving mode struck and killed a pedestrian in Tempe, AZ, on Monday, in the first known pedestrian fatality involving the self-driving technology, as reported by our sister site CNET. Uber has since temporarily stopped its self-driving operations in Tempe and all other cities where it has been testing its vehicles, including Phoenix, Pittsburgh, San Francisco, and Toronto. The vehicle was in autonomous mode at the time of the accident, with a vehicle operator behind the wheel, according to a statement from the Tempe police. The victim was crossing Curry Road in Tempe outside of the crosswalk when she was struck by the Uber car, the police said in the statement. She was transported to a local hospital, where she passed away from her injuries. SEE: IT leader's guide to the future of autonomous vehicles (Tech Pro Research) The investigation is still active, and Uber is assisting, the police said in the statement. "Our hearts go out to the victim's family," an Uber spokeswoman said in a statement. "We are fully cooperating with local authorities in their investigation of this incident." "Some incredibly sad news out of Arizona," Uber CEO Dara Khosrowshahi tweeted on Monday. "We're thinking of the victim's family as we work with local law enforcement to understand what happened." This is the first known pedestrian fatality from a self-driving car. However, autonomous vehicles have been involved in a number of other accidents, including backing into a delivery truck in Las Vegas and getting hit by another car whose human driver did not yield in Tempe, both in 2017. In May 2016, a Tesla driver was killed in an accident while the car was operating in its semi-autonomous Autopilot mode. A US Department of Transportation investigation did not identify any defects in design or performance of the Autopilot system. The pedestrian fatality could have immediate implications for the rollout of self-driving taxis and delivery vehicles, which are predicted by many to be the first widespread applications of self-driving technology. It could mean that progress is slowed until more regulations are in place. The accident could also impact discussions around how autonomous vehicles will change auto insurance. KPMG estimates that the technology will lead to an 80% drop in accident frequency by 2040, and that providers will need to shift from covering the car itself to the software of the car. It remains to be seen how coverage of accidents like this may change. Stay informed, click here to subscribe to the TechRepublic Tech News You Can Use newsletter.
Building a slide deck, pitch, or presentation? Here are the big takeaways: IBM launched IBM Watson Data Kits, designed to speed the development of AI applications in the enterprise. Enterprise AI apps created with IBM Watson Data Kits have the potential to aid in faster, more informed decision making for business leaders. On Tuesday, IBM launched IBM Watson Data Kits, designed to speed the development of artificial intelligence (AI) applications in the enterprise. These apps have the potential to aid in faster, more informed decision making for business leaders, according to a press release. "Watson Data Kits will provide companies across industries with pre-enriched, machine readable, industry-specific data that can enable them to scale AI across their business," the release said. In Q2, Watson Data Kits will become available for the travel, transportation, and food industries, with kits for travel points of interest and food menus. Kits tailored for additional industries are also expected in the coming months, the release noted. SEE: The Power of IoT and Big Data (Tech Pro Research) More than half of data scientists said they spend most of their time on janitorial tasks, such as cleaning and organizing data, labeling data, and collecting data sets, according to a CrowdFlower report, making it difficult for business leaders to implement AI technology at scale. Streamlining and accelerating the development process for AI engineers and data scientists will help companies more quickly gain insights from their data, and drive greater business value, according to IBM. "Big data is fueling the cognitive era. However, businesses need the right data to truly drive innovation," Kristen Lauria, general manager of Watson media and content, said in the release. "IBM Watson Data Kits can help bridge that gap by providing the machine-readable, pre-trained data companies require to accelerate AI development and lead to a faster time to insight and value. Data is hard, but Watson can make it easier for stakeholders at every level, from CIOs to data scientists." The Watson Data Kit for travel points of interest will offer airlines, hotels, and online travel agencies with more than 300,000 points of interest in 100 categories, to create better experiences for travelers, according to the release. Companies in the travel and transportation industry can use the kits to build AI-powered web and mobile apps to help users find information on points of interest in a given area. For example, the release noted, a hospitality company could use the data kit to train AI powering its chatbot to recommend personalized destinations based on a customer's individual preferences. Meanwhile, the Watson Data Kit for food menus includes 700,000 menus in 21,000 US cities, according to the release. This will offer AI developers content for apps that can help users filter menu items, types of food, locations, and price points. The kit allows developers to build in side-by-side comparisons of menu choices and prices. For example, the release noted, the kit could be integrated into a car's navigation system to provide voice-activated directions to the closest bakery that sells gluten-free muffins. Stay up to date on all the latest big data news. Click here to subscribe to the TechRepublic Big Data Essentials newsletter. Subscribe Also see
The National Football League has 180 million fans worldwide. About 17 million of those trek out to stadiums each season—which means over 90% of NFL fans are catching the games on TV, online and mobile. That's why NFL games represented 37 of the top 50 highest-rated television broadcasts of 2017. A lot of the appeal of football is that it's not just about the long throws of quarterbacks, the bullish strength of defensive lineman, and the lightning-fast reflexes of wide receivers, for example. It's about the chess match between the coaches, and the preparation, instincts, and quick decision-making of the smartest players. But while these athletic feats are amazing to watch and easy to recognize, it's often a lot harder to pinpoint the strategies and the smarts that tip a game one way or the other. That's where the NFL's Next Gen Stats—a big partnership with Amazon Web Services—is changing how the game is understood, using a combination of cloud computing, big data analytics, and machine learning. "We've been turning a corner on creating metrics that are more advanced and do a better job of telling the story of the game," Matt Swensson, the NFL's vice president of emerging products and technology, told TechRepublic. SEE: Big data policy (Tech Pro Research) | Job description: Chief data officer (Tech Pro Research) | Job description: Data scientist (Tech Pro Research) The NFL has been keeping statistics since 1920. But most of the stats that it displays to the public had been pretty standard for the past several decades. It was the kind of stuff you see on trading cards and game programs—yards passing, yards rushing, catches, tackles, quarterback sacks, interceptions, etc. But in 2015, it began putting a pair of RFID tags from Zebra Technologies on the shoulder pads of every NFL player in order to track speed, field location, and movement patterns. Now, it also has sensors on the referees, first down markers, and end zone pylons. How to fully take advantage of all this data and convert it into value for the NFL and its customers was the big challenge. When Amazon learned that the NFL now had all this player telemetry data, the AWS team suggested that they could help create more value with analytics—similar to what AWS had famously done with Major League Baseball Advanced Media. "We started working with [the NFL] to help them apply machine learning to that data," AWS vice president of marketing Ariel Kelman told TechRepublic. "They're recording things like when the ball was snapped, what the formation was, how many of which type of player was on the field, what the result of the play was. A lot of that is pattern recognition... The idea is there's a whole bunch of things that require manual detection and tagging that they want to be able to automate." So AWS and the NFL drew up a partnership where the NFL used the Amazon cloud, its advanced analytics tools and the new SageMaker machine learning product—while Amazon got to slap the AWS logo on Next Gen Stats as the official sponsor and get a bunch of promotional opportunities that show off what its big data tools can do. The deal kicked off about six months ago, before the start of the 2017 football season, and culminates on Sunday in Super Bowl LII—although both the AWS and NFL folks were even more excited about what they're going to be able to do with the data next year. Here are the three main ways it's changing the game: 1. 
The impact of Next Gen Stats on NFL teams One week of NFL games now creates 3TB of data, NFL CIO Michelle McKenna-Doyle said in her presentation at Amazon's re:Invent conference last November. After each game, the league now exports a trove of data to each team to help them evaluate their overall performance and their players. The league provides some basic tools to help the teams evaluate the data along with a few basic insights. Some teams have their own data scientists or analytics partners to take it further. The teams are using the data to help inform their training, fitness, and game preparation. But there's one big caveat that's keeping them from using the data to plan game strategies and draw up plays. "Right now clubs are getting just their side of the ball, and so that's a decision point that's coming up," said the NFL's Swensson. "We want to be able to ultimately get to a place where both sides of the ball are available to clubs, so they can do a lot more interesting analysis." In other words, teams don't get their opponents' in-depth data or the patterns that machine learning can see. That's a big topic for the NFL in the upcoming off-season, and it's an issue that's up for consideration by the NFL Competition Committee. Image: NFL 2. The impact of Next Gen Stats on NFL broadcasts The place where Next Gen Stats has made its most visible impact is on the television broadcasts of NFL games by CBS Sports (both TechRepublic and CBS Sports are owned by CBS). AWS has brought the data visualizations of Next Gen Stats into CBS broadcasts and given CBS analysts data points to explain some of the most important plays in the game. AWS is also working with the NFL's other broadcast partners to bring similar capabilities next season. Some of the Next Gen Stats that analysts now have access to include, for example: Real-time location data on all of the players Player speed and acceleration Total running distance for each player for the entire game The amount of separation that receivers get from their defenders The pressure rate that defenses have on quarterbacks Percentage of quarterback throws into tight windows Announcers such as former Dallas Cowboys quarterback Tony Romo—a new color commentator at CBS this season—have embraced the data and used it to help give viewers an inside look at why some of the plays on the field succeed and others don't. "We're working with the guys in production at CBS Sports to try and evolve it to really make the fan experience better. It's early days. What we've learned from baseball is the way to present this data," said Amazon's Kelman. "We're looking forward to taking it to the next level next season." 3. The impact of Next Gen Stats on NFL fans For fans, the NFL has launched nextgenstats.nfl.com as a portal to view these new insights and data points. There are all kinds of new statistics that you've never seen on the back of a trading, such as: Average Time to Throw (quarterbacks) Average Completed Air Yards (quarterbacks) Aggressiveness Percentage (quarterbacks) Efficiency (running backs) 8+ Defenders in the Box (running backs) Average Time Behind Line of Scrimmage (running backs) Average Cushion (receivers) Average Separation (receivers) Average Targeted Air Yards (receivers) The site also includes charts for quarterbacks, running backs, and receivers to see their patterns from their last game. 
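To make one of these metrics concrete, the short sketch below computes a receiver's Average Separation from player tracking data. The CSV file and its column names are invented stand-ins for illustration; the NFL's actual tracking schema and Next Gen Stats pipeline are not described in this article.

# Toy example: average receiver-defender separation at the moment of each target.
# "tracking.csv" and its columns (receiver_x, receiver_y, defender_x, defender_y,
# is_target_frame) are hypothetical stand-ins for real RFID tracking data.
import pandas as pd
import numpy as np

def average_separation(path: str = "tracking.csv") -> float:
    df = pd.read_csv(path)
    targets = df[df["is_target_frame"] == 1]          # frames where the pass arrives
    separation = np.hypot(
        targets["receiver_x"] - targets["defender_x"],
        targets["receiver_y"] - targets["defender_y"],
    )
    return float(separation.mean())                    # in yards, if coordinates are yards

print(f"Average separation: {average_separation():.1f} yards")

The point of the sketch is that once per-frame positions exist for every player, stats like separation, cushion, or time to throw reduce to straightforward geometry over the tracking table; the harder machine learning work is in tagging formations and play outcomes automatically.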
In addition, the NFL publishes photo essays with specific insights from Next Gen Stats from the previous week's games, as well as videos that explain the differences and similarities between players, teams, and games based on the data. "There's some very complicated parts of football that can be really fascinating to die hard fans," said Swensson. "A lot of times you watch a game and maybe you don't realize some of the decisions and why they are made, or even some of the intricacies of the game such as why players line up a certain way. My hope is that [Next Gen Stats] continues to educate fans and help them understand more and more of our game." SEE: Turning Big Data into Business Insights (ZDNet special report) | Download the report as a PDF (TechRepublic) This stuff is obviously great source material for fantasy football junkies, but it can also fuel die-hard fans in their search to better understand the performance of their team and their favorite players—which can also create greater customer loyalty for the NFL. The good news for fans is that the program is just getting off the ground. "The stuff you see on the site now is just based off the tracking data and the splits we've been able to do based on location data, but not much pattern recognition," said Swensson. "A lot of the machine learning stuff we've done, we haven't put up yet. Our plan is to launch that for next season." Of course, there's one more big game left this season. For the fans watching Super Bowl LII between the New England Patriots and the Philadelphia Eagles on Sunday, here are a pair of Next Gen Stats videos that break down what's likely to be the game's key matchup: Image: NFL What other businesses can learn "The typical conversation that we're having with customers around machine learning is that it is one of the top priorities," said Kelman. "But, there is a huge gap in most of these companies between what they want to do and the skills of their people. It's kind of as simple as we have all this data, what should we do with machine learning? What problems should we point it at, and what kind of predictions should we make? The more examples that we can give our customers of what other people are doing, the better." Subscribe to TechRepublic's Big Data newsletter to keep up with the latest tips and best practices. Subscribe Also see
A new malware campaign is making its way into businesses through a malicious PowerPoint email attachment, Trend Micro research has found. According to the blog post, CVE-2017-0199 traditionally utilizes RTF documents, and this is the first time it has been seen to abuse PowerPoint Slide Show in the wild. The malware comes in an email that appears to be from a company that manufactures cables. The email tells the recipient to see the order and asks them to quote cost, insurance, and freight (CIF) and free on board (FOB) prices as well. Due to the targeted nature of the email, as the criminals are typically going after electronics companies, the post said, the email is being considered a spear phishing attack. In the sample email provided by Trend Micro, the attachment is titled PO-483848.ppsx. In that case, PO could be short for purchase order, in an effort to increase the perceived legitimacy of the PowerPoint file. SEE: 10 ways to minimize fileless malware infections If the victim opens the attached file, there will be no purchase order, not even any fake text attempting to be one. It simply reads: CVE-2017-8570. That's the name of another Microsoft vulnerability, the post said, but not the one that this particular malware is targeting. PowerPoint then initializes a script moniker and runs the malicious payload, the post said. If successful, it will download an XML file from the internet. Some JavaScript code in that XML file runs a PowerShell command that downloads and executes a remote access tool. At this point, the attackers will be able to run remote commands on the victim's machine. "The tool's capabilities are quite comprehensive, and includes a download & execute command, a keylogger, a screen logger, and recorders for both webcam and microphone," the post said. The biggest issue for this given attack is the fact that it comes by way of a PowerPoint file. Most current detection methods focus on the RTF delivery method, so that means attackers utilizing the PPSX files could have an easier time avoiding antivirus detection, the post said. To protect against attacks like this one, businesses should make sure that their systems are properly patched and updated to account for any known vulnerabilities. Also, users should be regularly educated on proper security hygiene and email etiquette. The 3 big takeaways for TechRepublic readers
1. A new spear phishing campaign is using PowerPoint files to exploit CVE-2017-0199 and deliver malware to victims.
2. If a user clicks on the attached file, it will run a remote access tool, and could allow attackers access to a user's keystrokes, screen, webcam, and microphone.
3. IT should keep systems updated and educate users on the proper behavior regarding attachments and emails from outside parties.
Stay informed, click here to subscribe to the TechRepublic Cybersecurity Insider newsletter.
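For defenders who want a quick way to triage a suspicious attachment of this type, the sketch below unzips a .ppsx file (an OOXML archive) and flags relationship targets that point to remote script or HTTP locations, which is how this class of CVE-2017-0199-style abuse typically pulls its payload. It is an illustrative heuristic only, the file name is hypothetical, and it is no substitute for patching or proper antivirus tooling.

# Rough triage heuristic: look inside a PowerPoint Show (.ppsx) archive for
# relationship targets that point outside the file (script: monikers or URLs).
import re
import zipfile

SUSPICIOUS = re.compile(r'Target="(?:script:|https?://)[^"]*"', re.IGNORECASE)

def flag_external_targets(path: str) -> list:
    """Return a list of 'archive member: suspicious target' strings."""
    hits = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            if name.endswith(".rels") or name.endswith(".xml"):
                text = archive.read(name).decode("utf-8", errors="ignore")
                for match in SUSPICIOUS.findall(text):
                    hits.append(f"{name}: {match}")
    return hits

# Example with a local copy of the attachment name mentioned above (hypothetical):
# print(flag_external_targets("PO-483848.ppsx"))

A clean presentation can legitimately contain external hyperlinks, so a hit from a script like this is a reason to escalate to proper analysis, not proof of compromise.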
Many industries have adopted blockchain technology as a core part of their operations. It creates transparency between parties so that mutual trust is no longer needed, which is why industries such as real estate, finance, and advertising are beginning to use it. And now, Wikipedia's cofounder Larry Sanger wants blockchain to replace the free online encyclopedia. TechRepublic's Dan Patterson met with Sanger to discuss why he joined Everipedia, and why the blockchain should replace Wikipedia. Everipedia is the encyclopedia of everything, where topics are unrestricted, unlike on Wikipedia, Sanger said. More technically, it's a blockchain encyclopedia with a decentralized protocol for accessing and sharing knowledge. SEE: Cheat sheet: Blockchain A blockchain is a list of transactions, or a ledger, that can be used to represent a database. By putting all of Everipedia's content on the blockchain, it creates transparency between writers and readers. When it's all set up, it will be possible for people to propose adding information to the blockchain, Sanger explained, and people who have tokens (which are earned by adding to the blockchain) come to a consensus about what information gets added to the blockchain. "In terms of editorial standards, just to get on an encyclopedia blockchain will be relatively easy," Sanger said. When the initial protocol is adopted, it should be a very low bar that allows encyclopedia articles to get onto the blockchain, he said. However, the actual editorial decisions that are made won't happen on that level. Different users of the information that makes up the blockchain and Everipedia will make the decisions on the ratings of articles, and the order in which they will be placed. Users will be able to go in, read those articles, and visit the Everipedia interface to submit their own ratings, he said. Find out The Next Big Thing and subscribe to TechRepublic's newsletter.
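To ground the "list of transactions" description, here is a toy illustration of a hash-chained ledger in Python. It is not Everipedia's actual protocol or consensus mechanism, just the minimal data structure the article is describing: each block records the hash of the previous one, so tampering with history is detectable.

# Minimal illustration of a hash-chained ledger ("a list of transactions"):
# each block stores the hash of the previous block, so edits to history break the chain.
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,   # e.g., proposed article edits
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(
    [{"edit": "Add article: Example topic", "editor": "alice"}],
    chain[-1]["hash"],
))

# An intact chain links each block to its predecessor's hash:
print(chain[1]["prev_hash"] == chain[0]["hash"])  # True

What a real system like the one Sanger describes adds on top of this structure is the token-weighted consensus about which proposed blocks of edits actually get appended.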
Most of us use macros to automate processes that we repeat or that require specialized knowledge. Regardless of why you use macros, you want them to run as quickly as possible. You can optimize your code by:
Disabling features that update the sheet
Avoiding selecting things
In this article, I'll show you how to make simple changes to your code to optimize it for speed. I'm using Excel 2016 on a Windows 10 64-bit system, but these tips will work in older versions. The tips are specific to the desktop version because macros don't run in the browser version. There's no demonstration file; you won't need one.
1: Disable updating features
Have you noticed that your screen sometimes flickers while a macro is running? This happens when Excel attempts to redraw the screen to show changes made by the running macro. If screen updates aren't necessary while running the macro, consider disabling this feature so your macro can run a bit faster. Use the following statements to disable and enable this feature:
Application.ScreenUpdating = False
'macro code
Application.ScreenUpdating = True
You can expect Excel to redraw the screen when the macro completes its work—when you reset the property to True. Disabling screen updates won't disable the Status Bar, which displays information during normal operations, including what your macro is doing. To disable updates to the Status Bar, use the DisplayStatusBar property as follows:
Application.ScreenUpdating = False
Application.DisplayStatusBar = False
'macro code
Application.DisplayStatusBar = True
Application.ScreenUpdating = True
If your macro is analyzing a lot of data, consider setting the Calculation property to Manual while the macro is running. That way, the workbook won't recalculate unless you force it to by pressing F9. Calculation speed probably isn't a large performance factor in most normal workbooks though, and it can have unexpected results, so use it sparingly—as needed:
Application.Calculation = xlCalculationManual
Application.ScreenUpdating = False
Application.DisplayStatusBar = False
'macro code
Application.DisplayStatusBar = True
Application.ScreenUpdating = True
Application.Calculation = xlCalculationAutomatic
Macros can trigger unnecessary event procedures. For instance, entering a value into a cell triggers the Worksheet_Change event. A few won't be noticeable, but if the macro is complex enough, you might consider disabling events while the macro is running:
Application.Calculation = xlCalculationManual
Application.ScreenUpdating = False
Application.DisplayStatusBar = False
Application.EnableEvents = False
'macro code
Application.EnableEvents = True
Application.DisplayStatusBar = True
Application.ScreenUpdating = True
Application.Calculation = xlCalculationAutomatic
Similar to setting the Calculation property to Manual, disabling events can have unexpected results, so use it with careful consideration.
2: Don't select things
If you use the macro recorder, you may have noticed that it's fond of using the Select method to explicitly reference things. It works, but it's slow and prone to runtime errors. If you want to start with the recorder, do so. Then, review the resulting code for Select methods and change them to Range references. For example, the following recorder code applies italics to C4:C62:
Sub Macro1()
    Range("C4:C62").Select
    Selection.Font.Italic = True
End Sub
The recorder uses the Select method to identify the range.
Once you know the right methods and properties—Font.Italic = True—you can easily rewrite the macro as follows:
Sub Macro2()
    Range("C4:C62").Font.Italic = True
    ' Sheets("Divisions").Range("C4:C62").Font.Italic = True
    ' Range("Table3[Species]").Font.Italic = True
End Sub
Macro2() accomplishes the same thing with one line of code and without selecting the range. In short, you simply combine the two statements and delete the Select method and the Selection object. The optimized code is more efficient and less prone to runtime errors. The commented lines show the Sheet and Table object references. The sheet reference is necessary only if you want to run the macro outside of the sheet (Divisions, in this case). The Table reference targets the Species column in a Table named Table3. To learn more about efficient selection methods when using VBA, read Excel tips: How to select cells and ranges efficiently using VBA. Similar to selecting ranges and objects to perform an action in the sheet, an explicit reference to the sheet also slows down processing. The solution is to use variables. For example, the following code references the same cell (value) six times:
Function ReturnFeeSlow()
    Select Case Range("I4")
        Case 1
            ReturnFee = Range("I4") * 10
        Case 2
            ReturnFee = Range("I4") * 20
        Case 3
            ReturnFee = Range("I4") * 30
        Case 4
            ReturnFee = Range("I4") * 40
        Case 5
            ReturnFee = Range("I4") * 50
    End Select
    MsgBox ReturnFee, vbOKOnly
End Function
At the very least, ReturnFeeSlow() makes two explicit references to I4. It's not changing the value; it's using the value in a simple expression. In this case, it's more efficient to define a variable with the value in I4 and use the variable, as follows:
Function ReturnFeeFast()
    Dim intFee As Integer
    intFee = Range("I4").Value
    Select Case intFee
        Case 1
            ReturnFee = intFee * 10
        Case 2
            ReturnFee = intFee * 20
        Case 3
            ReturnFee = intFee * 30
        Case 4
            ReturnFee = intFee * 40
        Case 5
            ReturnFee = intFee * 50
    End Select
    MsgBox ReturnFee, vbOKOnly
End Function
Faster is better
Please forgive the obnoxiously contrived examples, but the concept is the point, not the code's purpose. Specifically, built-in updating features and explicit references to the sheet or a range will slow down your code. Admittedly, with today's fast systems, simple macros won't always need optimization. However, if you're working with a complex custom application, these easy-to-implement changes should improve efficiency. Get more great Office tips and tricks delivered to your inbox. Sign up for TechRepublic's Microsoft Weekly newsletter.
Send me your question about Office
I answer readers' questions when I can, but there's no guarantee. Don't send files unless requested; initial requests for help that arrive with attached files will be deleted unread. You can send screenshots of your data to help clarify your question. When contacting me, be as specific as possible. For example, "Please troubleshoot my workbook and fix what's wrong" probably won't get a response, but "Can you tell me why this formula isn't returning the expected results?" might. Please mention the app and version that you're using. I'm not reimbursed by TechRepublic for my time or expertise when helping readers, nor do I ask for a fee from readers I help. You can contact me at [email protected].
According to Codecademy, 2017 might have been the year of re-education. New research from the online coding resource said more than half of its users this year hold a college degree, using their free time to boost their skill set by learning to code. About 55% of Codecademy's users reported having some kind of college degree, showing that tech professionals are finding ways to expand their resumes and stay relevant in a constantly changing field. "We're hearing increasingly from people learning to code to get a leg up in their current industry and from people who want to move into tech adjacent fields," the report said. "Considering that this is one of the few sectors of the economy that's growing, it makes sense." SEE: Hiring kit: Python developer (Tech Pro Research) In a survey of part of the site's 45 million users, about 40% said they were learning coding skills to enter software development or a similar position. Others said coding was empowering and enabled them to work from home, the report said. With multiple reports of a tech skills gap, tech professionals may continue to see online courses, like those at Codecademy, as a way to keep up with new programming languages and refresh pre-existing skills. Online courses may also help those without a degree build the skills employers are looking for. About half of the respondents said they had never taken a university coding course. Of those who had taken an in-person college coding class, 25% made the switch to online courses because they felt they were a safer space to learn a new skill. The report draws on users who have already begun learning to code online, but the findings may signal a shift toward more online courses as a way to boost tech resumes. Around 10% of respondents said they felt happy when learning in a traditional university setting, the report said. Overall, only 5% of respondents said they were anxious in such a setting, but women were 2.5x more likely to say they were anxious. Online courses may offer women and minorities a chance to learn tech skills without feeling intimidated by typically male-dominated university computer science programs. The report found that, while women may feel more empowered in online courses, men are more likely to see a pay raise or promotion due to learning how to code. Men are almost 55% more likely than women to say they made more money due to their new coding skills. Men are 1.5x more likely than women to receive a promotion due to the skills, the survey found. "Dozens of programs have sprung up to help women move into careers in tech, but it seems that even when women take all the right steps, they're not seeing the reward," the report said of the findings. Despite women-targeted programs and adjusted entry-level computer science courses, more may need to be done to fight gender disparity in the tech industry.

Want to use these statistics in your next presentation? Feel free to copy and paste these takeaways:

- 55% of Codecademy users hold a college degree, but are still learning how to code via the online course platform. -Codecademy, 2017.
- 40% said they were learning new coding skills to move into a software development job or similar position. -Codecademy, 2017.
- Men are 55% more likely than women to make more money due to learning how to code. -Codecademy, 2017.
Google has posted a defense of Gmail's privacy protections after a Wall Street Journal report found the service was allowing third-party companies to read personal emails. The WSJ reported that employees at firms offering personalized services, such as shopping and travel suggestions, are accessing and reading Gmail users' messages. While not referencing the story directly, Google Cloud's director of security, trust and privacy, Suzanne Frey, published a post in the wake of the report, in which she outlined Gmail's privacy protections. "We continuously work to vet developers and their apps that integrate with Gmail before we open them for general access, and we give both enterprise admins and individual consumers transparency and control over how their data is used," she wrote. SEE: GDPR security pack: Policies to protect data and achieve compliance (Tech Pro Research) Before a third-party app can access Gmail messages, Frey says the software is submitted to "a multi-step review process that includes automated and manual review of the developer, assessment of the app's privacy policy and homepage to ensure it is a legitimate app, and in-app testing to ensure the app works as it says it does". A key part of this review is ensuring that apps only collect data they need and don't misrepresent how they are using this data, according to Frey.

How to keep your Gmail secure

Third-party apps need to have been given explicit permission by the user before those apps can access personal data, Frey said, adding that these permissions can be revoked using the Security Checkup page in the user's Google account. Those concerned about third-party access to their Gmail account can also visit myaccount.google.com and select the Apps with account access page, from which they can revoke any previously-granted permissions. Business users enjoy a wider range of protections, with G Suite admins able to screen connected OAuth apps to limit the data access that individual users are able to grant. Google ceased scanning consumer Gmail messages to personalize ads to users in June last year, a point that Frey stressed in her post yesterday. "We do not process email content to serve ads, and we are not compensated by developers for API access. Gmail's primary business model is to sell our paid email service to organizations as a part of G Suite." Public awareness of privacy issues has been heightened recently, following the Cambridge Analytica scandal, in which the data firm was accused of using the personal information of millions of Facebook users to try to change election results. Despite Google's assurances, David Emm, principal security researcher at Kaspersky Lab, says the WSJ's findings show how important it is for individuals and businesses to pay close attention to the permissions they give third-party apps. "We have a right to privacy - but we need to be aware of what terms and conditions we are agreeing to when signing up for free email and social-media accounts, especially regarding the rights we are waiving or the access to data that we are giving away," he said. "We should also think twice before allowing third-party apps to connect to our accounts." The big takeaways for tech leaders: G Suite admins can screen connected OAuth apps to limit the data access that individual users are able to grant.
Those concerned about third-party access to their Gmail account can visit myaccount.google.com and select the Apps with account access page, from which they can revoke any previously-granted permissions.
Building a slide deck, pitch, or presentation? Here are the big takeaways:

- Overall, iPhone X users gave the product a 97% customer satisfaction rating. — Creative Strategies, 2018
- Despite overall customer satisfaction with the iPhone X, device owners had problems with Siri, leaving the digital assistant with a roughly 20% customer satisfaction rating. — Creative Strategies, 2018

For years, Apple has fielded complaints about Siri's functionality, and as more virtual assistants have popped up from its rivals, users continue to grumble about the things Siri cannot do. That was one of the biggest takeaways from a study of iPhone X users conducted by Creative Strategies, Inc. Ben Bajarin, principal analyst and the head of primary research, said in the report that iPhone X owners gave the product "an overall 97% customer satisfaction. While that number is impressive, what really stands out when you do customer satisfaction studies is the percentage who say they are very satisfied with the product," Bajarin wrote. In terms of the survey respondents who met that "very satisfied" mark, the report found it to be about 85% of iPhone X owners. SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research) But when the report authors broke it down by specific features, Siri stood out as one of the only things users were not happy with. Every other feature had a customer satisfaction percentage above 60%. Siri was the only feature below that mark, at about 20%. This figure is more notable, Bajarin said, because the survey focused on early Apple adopters, who he said "tend to be more critical and less satisfied overall than mainstream consumers." That is good for Apple, given the very high marks almost every other feature received in the survey. But Siri's very low score dovetails with the years of complaints users have had about how Siri functions. The Street's Leon Lazaroff wrote that Siri's main problem is a consumer base expecting it to function like other virtual assistants, which it cannot do because it was designed for a very specific purpose. "Siri's job is to integrate those devices, it's meant to grease the connections between Apple devices, making the iPhone integral to the iPad, AppleWatch and AppleTV — and all points in between," Lazaroff wrote. "The problem for Apple is that people have come to expect a voice-activated device that can answer relatively easy questions fast and efficiently, and Siri...has mostly fallen short." Siri's inability to answer basic questions the way Amazon's Alexa, Microsoft's Cortana, and Google Assistant can has left users confused about what the feature is actually supposed to do. Verge journalist Walt Mossberg wrote in 2016 that Apple "wasted its lead" with Siri and was too slow to add features and functionality that its rivals had already mastered. "Siri's huge promise has been shrunk to just making voice calls and sending messages to contacts, and maybe getting the weather, using voice commands. If you try and treat Siri like a truly intelligent assistant, aware of the wider world, it often fails, even though Apple presentations and its Siri website suggest otherwise," Mossberg wrote. A study done by Stone Temple last year found that Siri "only answered 21.7 percent of questions and nailed 62.2 percent of them completely, correctly," noting that "Alexa and Siri both face the limitation of not being able to leverage a full crawl of the web to supplement their knowledge bases.
It will be interesting to see how they both address that challenge." Despite the challenges with Siri, Apple should be heartened to know that most users gave the iPhone X very high scores on almost everything else, and Bajarin said Apple is set up nicely for the future. "Overall, the data we collected around iPhone X show that if Apple is truly using this product as the baseline for innovation for the next decade, then they are off to a strong start and have built a solid foundation," Bajarin wrote in the report. Bajarin later added: "If Apple can bring Siri back to a leadership position and in combination continue to build on the hardware and software around iPhone X base foundation, then they will remain well positioned for the next decade."
Building a slide deck, pitch, or presentation? Here are the big takeaways:

- Investing in artificial intelligence (AI) and human-machine collaboration could boost business revenues by 38% by 2022, and raise employment levels by 10%. — Accenture, 2018
- 61% of senior executives think the share of roles requiring collaboration with AI will rise in the next three years. — Accenture, 2018

Artificial intelligence (AI) is poised to impact nearly every industry, and businesses that don't take immediate steps to upskill their workforces to collaborate with machines will miss out on revenue, according to a recent report from Accenture Strategy. If businesses invest in AI and human-machine collaboration at the same rate as top-performing companies, they could boost revenues by 38% by 2022, the report found, lifting profits by $4.8 trillion globally. These businesses could also raise employment levels by 10% in that timeframe. Business leaders are optimistic about the changes that AI can bring to their organization and workforce, according to the 1,200 senior executives surveyed for the report: 72% said that intelligent technology will be critical to their organization's market differentiation. Further, 61% said the share of roles requiring collaboration with AI will rise in the next three years. And 69% of the 14,000 workers surveyed said that it was important to develop skills to work with these intelligent systems. SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research) However, a disconnect remains, as only 3% of business leaders said that their organization plans to significantly increase its investment in reskilling workers in the next three years. "To achieve higher rates of growth in the age of AI, companies need to invest more in equipping their people to work with machines in new ways," Mark Knickrehm, group chief executive of Accenture Strategy, said in a press release. "Increasingly, businesses will be judged on their commitment to what we call Applied Intelligence - the ability to rapidly implement intelligent technology and human ingenuity across all parts of their core business to secure this growth." While many fear that AI will replace low-level jobs, most businesses are optimistic about the impact on their companies, the report found: 63% of senior executives said they think their company will create net job gains in the next three years due to AI, while 62% of workers said they believe AI will have a positive impact on their work. Here are three ways business leaders can shape their future workforce in the age of AI, according to Accenture:

1. Reimagine work by configuring work from the bottom up

Some 46% of business leaders agreed that job descriptions are already obsolete, and 29% said they have redesigned jobs extensively. Leaders should assess tasks rather than jobs, and then allocate those tasks to both machines and people. This balances the need to automate work with the need to elevate your workers' capabilities.

2. Pivot the workforce to areas that unlock new forms of value

Leaders should go beyond process efficiencies to prepare their workforce to create new customer experiences, Accenture recommends. This might mean finding new growth models by reinvesting savings gained from automation into the future workforce. It also requires a new leadership mindset that values long-term transformation opportunities.

3. Scale up "new skilling"

Determine your workforce's current skillset, and their willingness to learn how to work with AI.
You can then use digital platforms to target different areas of the workforce with personalized learning opportunities. "Business leaders must take immediate steps to pivot their workforce to enter an entirely new world where human ingenuity meets intelligent technology to unlock new forms of growth," Ellyn Shook, chief leadership and human resources officer at Accenture, said in the release. "Workers are impatient to collaborate with AI, giving leaders the opportunity to demonstrate true Applied Intelligence within their organization."
According to software rating and review site Capterra, there are over 600 project management solutions. However, not all PM software will work in every industry, company, or project size and type. Here are three steps to help your company select the right software to suit your project needs.

1. Evaluate your internal environment.

Gather all the details about your business and how it operates. Factor in company size, hierarchy, how departments and units are structured, and how they interact. Also, analyze products or services, culture, and the available internal and outsourced talent. Some other factors to consider include internal policies, technology, internal views, project methodologies, long-term goals, and finances. These have the potential to create alignment issues with the way your business operates or manages projects. Before choosing PM software, evaluate the strengths and weaknesses of your business. This in-depth analysis may slow the selection process down, and take time and effort, but it is an essential exercise that will help all stakeholders avoid disappointment, wasted time, and potentially unnecessary costs. Having a big-picture view can reduce the risk of selecting a solution that does not align well with the long-term strategy or the unique inner workings of your business. SEE: Managing vendor relationships: Time commitment, benefits, and pain points (Tech Pro Research)

2. Identify projects and confirm PM software will sufficiently support all aspects.

Work with business leaders to identify and document high-level details about upcoming projects, both short and long-term. Record as much detail as possible; this information will be required when sending requests for information (RFIs), requests for proposals (RFPs), or requests for quotes (RFQs) to software vendors. To ensure that a particular solution can meet your short- and long-term goals, pay close attention to the details provided and make sure project requirements are properly and fully addressed in vendor documents. Look for features that work for your specific project needs. Just because a vendor offers more features than other vendors does not make it the best choice for your company's projects. Make sure the solution can accommodate your key processes and methodologies and that it is scalable and customizable. Also take the time to make sure the vendor can complete onboarding within your company's budget and schedule. SEE: How to build a successful project manager career (free PDF) (TechRepublic)

3. Do a trial run.

Whenever possible, take the opportunity to do a trial run of the fully-working version of the software your company intends to implement. Again, involve key participants in this step to sufficiently test all the required features and make sure each meets your project needs. This may require people from many different areas of the company: make sure frontline users, as well as IT specialists, are okay with the system. This is the final gateway before onboarding and reduces the costly buyer's remorse that companies encounter all too often. Evaluating your internal business environment, identifying potential short and long-term projects, confirming sufficient support, and doing a trial run of the software can help your company select the right solution to suit your project needs.
Medium and large enterprises are set to double their usage of machine learning by the end of 2018, according to a new report from professional services firm Deloitte. The number of machine learning pilots and implementations will double by the end of 2018, and then double again from that number by the end of 2020, Deloitte's report predicts. Businesses spent $17 billion on the technology in 2017, and that is expected to increase to $57.6 billion by 2021. Deloitte identified five factors that have held back machine learning growth: too few practitioners, high costs, immature tools, confusing models, and business regulations. SEE: Quick glossary: Artificial intelligence (Tech Pro Research) The estimated growth shows that as the technology evolves, buy-in from businesses may increase, whether they're using it to improve workflows or create new products. Businesses that opt not to adopt the technology may risk falling behind others. The report looked at other emerging technologies, including augmented reality (AR). One billion smartphone users will create AR content at least once in 2018, the report found, with 300 million using AR at least monthly. AR, which used to mean little more than an animated face filter on Snapchat or Instagram, will expand to other uses on mobile devices. Last week, Apple named AR as one of its breakout app trends of 2017. Deloitte predicts direct revenues from AR on smartphones will hit $1 billion by 2020, growing tenfold from 2018. With the trend continuing to grow, developers and brands may need to find ways to integrate AR into an app to stay up-to-date and entertain users. While Deloitte expects less than $100 million in discrete app revenues for AR content globally in 2018, it could drive sales in other ways. The ability to simply host AR content may be a key differentiator for consumers looking to switch or upgrade devices. In good news for business travelers, the Deloitte report also found in-flight connectivity (IFC) will be up 20% in 2018. One-quarter of all airline passengers will fly on a plane with access to the internet, giving business travelers a better chance of being able to connect and get more work done in transit.

The 3 big takeaways for TechRepublic readers

1. Businesses will double their usage of machine learning by the end of 2018, a new Deloitte report found. So far, the technology has been held back by regulations, evolving tools, high costs, and few practitioners.
2. The projected growth, expected to hit $57.6 billion in spending by 2021, shows how it is being increasingly utilized as a business tool.
3. Those not currently planning on using machine learning in their business may need to reconsider, or risk being left behind.
