Technology Archives • Kentucky Lantern

How social media is influencing our interactions with public lands
By Paige Gross | Oct. 31, 2024
Alice Ford, an outdoor content creator and show host, has spent a decade centering conservationist education on her YouTube channel. She said the glut of nature photos and short videos on social media is resulting in “just people wanting to see a place more than they have respect for the place.” (Photo courtesy of Alice Ford)
Don’t pet the fluffy cows.
That’s the Instagram bio tagline for the National Park Service’s popular account, which showcases stunning photos of the diverse terrains of the United States’ 431 national parks.
The cheeky statement, followed by a buffalo emoji, is meant to make its 6 million followers laugh, NPS’ social media specialist Matthew Turner says, but it’s also a very real warning.
“We want you to be really prepared to stay this distance, and be aware of your surroundings at all times,” Turner said. “And to know that if you don’t, there are consequences where you can get hurt.”
Technology and the rise of social media have driven new people to visit public parks and lands, as the platforms make it easier to showcase the great outdoors. But outdoor enthusiasts and environmental conservationists say social media has also contributed to “selfie tourism,” the influx of visitors to specific landmarks that go viral on social media.
The term can also describe the behavior of those who crowd a landmark or ignore safety protocols to get the perfect shot.
Every year there are incidents of visitors having dangerous interactions with wildlife, getting lost in the parks or even losing their lives. It’s hard to quantify exactly how social media influences the decision-making or behavior of park visitors, but several fatal and near-fatal incidents have been connected to attempts to capture content.
In 2018, a 29- and 30-year-old couple fell to their deaths in Yosemite National Park in California while attempting to take a photo at Taft Point. Several people have been attacked by bison in Yellowstone National Park over the last three years — at least one was a tourist trying to touch a bison while recording with her phone.
“A selfie in and of itself can inspire others. Maybe you see a friend post from a great trip, and it inspires you to go,” said Phillip Kilbridge, CEO of NatureBridge. “But you better do it thoughtfully. You better realize that when there’s a fence, it’s because there’s loose rock on the other side, or there’s a steep fall, or so many other unintended consequences.”
‘Loving our parks to death’
Kilbridge runs NatureBridge, an organization that teaches young people how to explore the outdoors without technology. The organization was initially founded with the intention of exploring parks in their off-peak seasons, and teaching kids to learn more about themselves and the environment with low barriers to entry on cost and prior education.
The parks have seen a surge in visitors in the last few years, topping 300 million visitors nearly every year since the National Park Service’s centennial celebration in 2016.
NatureBridge has brought more than 1 million kids through the program over its tenure and operates in Golden Gate National Recreation Area in California, Olympic National Park in Washington, Prince William Forest Park in Virginia and Yosemite. It makes a conscious effort to explore areas and trails beyond the most popular ones, but high visitation is putting strain on the hotels and areas surrounding the parks, and as a result, it’s more expensive to operate the program.
The social media effect on certain areas of the parks might be evident in some data from Yosemite National Park. Many drive in, take pictures at the iconic Half Dome and El Capitan rock formations, and then they head out, Kilbridge said. The focus is on “documenting the visit and putting it on their checklist or bucket list, to prove that they’ve done it.”
“You’ve probably heard the phrase, ‘we’re loving our parks to death,’” Kilbridge said. “But the truth is, we’re loving certain parts of certain parks to death.”
Cynthia Hernandez, a National Park Service public affairs specialist, said the agency uses social media to show examples of good environmental stewardship. Staff welcome and encourage new visitors, but they want them to be educated about preserving trails, picking up trash, and learning the history and culture of the places they’re visiting.
“We ask visitors to be adaptable and to listen to the park rangers,” Hernandez said. “You know, if the parking lot is full, don’t drive wherever. We like to say, ‘what is your plan B?’”
New Hampshire’s public and private lands are feeling the impact of some not-so-respectful visitors this year, as its peak fall foliage season, a few-week stretch in late September and October, is bringing an estimated 3.7 million visitors this fall, The Washington Post reported. New Hampshirites, and their neighbors in Vermont, are dealing with clogged roads, crowded hiking trails, trespassing on private property and trash left behind by visitors, many of them in pursuit of the perfect fall photo.
Some towns have closed roads to nonlocal traffic, while others have had to pay for extra patrols on routes leading to lookouts and other popular spots. One group of neighbors in Pomfret, Vermont, raised $22,500 in a GoFundMe campaign to “save” their road from the surge of influencers, with the funds planned to go toward temporary closures and increased signage, the Post reported.
The effects Kilbridge described, and those New Englanders are experiencing, are among the reasons Wesley Littlefield, a Salt Lake City-based marketing manager and outdoor content creator, has become mindful of not overexposing certain locations. Littlefield has been posting on social media and making YouTube videos about fishing, kayaking and other outdoor adventures for a few years, and he focuses on educating people about “leave no trace” principles.
He loves exploring the Southwest, but some of his favorite trails and natural wonders have become overpopulated after gaining attention on social media. Horseshoe Bend in Arizona is a prime example, he said, as is Antelope Canyon, which sits on Navajo land.
“What was once a peaceful overlook is now packed with people looking to snap that perfect shot, often at the expense of the environment around them,” he said of Horseshoe Bend. “You’ll notice things like litter, soil erosion and even permanent damage to local ecosystems. In extreme cases, wildlife habitats can be disrupted or destroyed, which takes away from the natural beauty and balance of these areas.”
Responsible exploration?
Littlefield said he loves that technology has allowed people to discover new places and share experiences. But carelessness has made him more conservative about geotagging “fragile” locations. It’s his way of protecting them while still sharing his love for the outdoors, he said.
“We want these places to remain as beautiful and untouched as possible for future visitors,” Littlefield said.
Alice Ford is another content creator who shares her outdoor adventures online as a way to educate others about conservation. She hosts a show on PBS called “Alice’s Adventures on Earth,” holds a master’s degree in environmental management and has been making YouTube videos showcasing outdoor travel, hiking and sustainable living for about a decade.
Her bread and butter is in longer-form content where she gets to place the focus on education.
“I think there’s an issue with these three-to-10 second videos showcasing a place,” Ford said. “Where you’re just seeing the most beautiful part, and you’re not learning anything about it, and you’re then not doing any research. And you’re just showing up because you want to get the exact same shot.”
When Ford travels, she looks for those less-busy places, not just to discover somewhere new to her, but also to avoid adding to the demand on places that don’t have the infrastructure to support an onslaught of visitors. Pulling off the side of a road inundated with visitors may not only cause traffic chaos but could also damage wildlife and road infrastructure, she said.
“I think also another thing that I’ve seen globally is just people wanting to see a place more than they have respect for the place,” Ford said.
There are very real physical dangers to jumping head-first into a hike or a trip without proper preparation, Ford said. She’s seen a rise in visitors to national parks and other places around the world attempting grueling hikes or exploring dangerous areas in extreme heat without the right footwear, food or water.
In Michigan, Sleeping Bear Dunes National Lakeshore — a National Park Service site featuring miles of sand and bluffs — has a dune climb that’s been well-documented on social media. The hike covers 3.5 miles of sandy, steep terrain and can take three or four hours.
The lakeshore gets an average of 1.5 million to 1.7 million visitors a year and reached peak visitation in 2020 and 2021, said Emily Sunblade, the park’s lead education ranger. The climb has long been a rite of passage, but rangers said visitors have been recognizing the location from social media posts of the famous sign outlining the $3,000 fee for being rescued if you get stuck.
The park instituted a preventive search-and-rescue program, in which volunteers stand at the top of the dune and check in with visitors before they attempt the hike, to ease the strain on local rescue resources; rescues are performed by township emergency services. The volunteers ask visitors whether they have enough water and whether they’re prepared for the hike to take two or more hours. The program dramatically lowered the number of rescues needed, Sunblade said.
“The social media posts we are seeing are having a positive impact as people share their experience of what the hike was like, and what they wish they knew before starting,” Sunblade said.
As much as social media can overexpose and overwhelm an area with visitors, it remains an essential tool for the National Park Service and for content creators who aim to educate others on responsible visitation.
Social media is an important component of the agency’s “digital toolbox,” Turner told States Newsroom. Its online profiles allow staff to engage in real time with visitors and connect with people around the world. They use the platforms as a forum to ask and answer questions, respond to outreach and share resources. And they do lean on memes and humor to get people’s attention and have people “learn without maybe realizing they’re learning,” Turner said.
There are ways to add a photo-worthy spot to your travels, if you do so responsibly, Ford said. She suggests researching what’s near those locations and whether the local community has been negatively impacted by visitors. If there aren’t enough restaurants, stores and accommodations, tourism may hurt the community or strain its resources.
Her hope is that folks are making informed decisions about their travel plans and considering the impact that social media may have on driving them to visit.
“I wish people would have more respect, not only for each other, but for the places that we visit,” Ford said. “And to just think a little bit more before we act in general, like whether that’s the time you’re taking to take a selfie at a popular destination, or the place in which we’re walking.”
Where Harris, Trump campaigns stand on tech policy
By Paige Gross | Oct. 21, 2024
Although they have not made technology a major topic on the campaign trail, the presidential campaigns have laid out policy approaches on issues such as AI, social media, and cryptocurrency. (Photo by Win McNamee/Getty Images)
Though technology policy isn’t one of the main issues driving voters to the polls in the upcoming presidential election, the speed at which technology develops will undoubtedly shape the way everyday Americans communicate, work and interact with the world over the next four years.
Concern about artificial intelligence’s role in the election plagues majorities of both Republicans and Democrats, a Pew Research Center survey found last month. Those polled are concerned that AI is being used to influence the election, and a poll earlier in the year showed that people are wary of the amount of power social media and Big Tech companies have over their lives.
Several bills regulating new technologies have been introduced in Congress, but no federal laws regulating artificial intelligence or data privacy have yet been passed. In October 2023, President Joe Biden signed an executive order calling for federal agencies to examine the impacts of AI and report how they might address problems.
Though tech issues aren’t central to their platforms, candidates Kamala Harris and Donald Trump have outlined some of how they see technology playing a role in Americans’ lives.
Harris’ policies tend to focus on inclusivity, data protection, net neutrality and expanding broadband access. One of the largest wins for the tech and science communities during the Biden-Harris administration was the CHIPS and Science Act, which in 2022 provided funding for research and development in environmental projects, clean energy and American manufacturing of semiconductors, the components at the heart of most electronics.
Trump’s policies would likely roll back some consumer protections put in place by the Biden administration, along with programs like the electric vehicle challenge. His platform also places heavy focus on what he considers “illegal censorship” by Big Tech companies, especially X, formerly Twitter, which banned him for “risk of further incitement of violence” after the Jan. 6, 2021, attack on the Capitol.
While Harris’ policies focus on striking a balance between encouraging innovation and checking overreach by Big Tech companies, Trump’s policies favor a more free-market approach.
On the topics of AI and cryptocurrency, though, Harris and Trump take somewhat similar approaches. At a fundraiser at Cipriani Wall Street earlier this week, Harris talked about the importance of these evolving technologies in the current economy while recognizing that they need oversight.
“We will partner together to invest in America’s competitiveness, to invest in America’s future,” Harris said. “We will encourage innovative technologies like AI and digital assets while protecting our consumers and investors.”
It’s a change from the current administration, which is more focused on protections for consumers amid the evolving market than on industry growth. Trump has similarly taken a lighter stance on AI and crypto, saying the industry needs time to work itself out, and he doesn’t support tough oversight at this moment.
On antitrust issues, a Harris administration would likely continue the enforcement push against large platforms and Big Tech companies that began under Biden. Biden signed a 2021 executive order targeting companies that use monopoly tactics and gather personal data, and his administration filed lawsuits against Facebook parent Meta and against Amazon.
Trump’s administration also carried out some antitrust suits against Google and Meta toward the end of his time in office. He’s long been vocal about his distrust and dislike for major social media platforms, claiming bias against him.
Most Americans favor more tech regulation than exists now. But they’re likely not too concerned with the nitty-gritty details that have kept bills sitting in Congress, said Ryan Waite, vice president of public affairs at digital advocacy firm Think Big.
Waite has spent the last two decades working in and around political campaigns, and he said emerging technologies like AI are as influential to the future internet landscape as the introduction of the internet itself was to everyday life 30 years ago.
He likened pending or potential AI legislation to the Telecommunications Act of 1996, which promoted competition and reduced regulation in order to bring down costs for consumers as new technologies in broadcast and internet exploded.
“I think if you talked to the average American then, they wouldn’t have known what the internet was, perhaps they experienced it at some level, but probably didn’t care much about how it was legislated,” Waite said.
But the legislation revamped the regulatory framework for the communications industry and changed how we work and receive information, Waite said. By the same token, AI and other emerging technologies are being adopted at such high rates that “we’re at an earthquake moment,” Waite said.
Both parties aim to strengthen the technology industry and America’s place in the world market, but they approach it differently, Waite said. Debates over legislation usually come down to trying to find appropriate, timely legislation that regulates these new technologies without stifling innovation and growth.
Harris’ campaign approach is viewed as “inclusive” on these issues, Waite said, with goals of providing broadband access everywhere and a focus on getting these tools into the hands of small businesses and underserved communities.
“They’re very interested in this equality framework, of being able to say everyone should have access to these tools,” Waite said.
Trump tends to lean more toward letting businesses innovate and do what they do well, with the belief that time will iron out problems in these technologies. His policies usually favor economic impact over safeguards.
Most Americans probably favor middle-ground legislation that provides data and bias protections from quickly growing technologies while allowing American companies to become global leaders, he said.
In the end, for most Americans, tech issues aren’t as partisan as the two-party system sets them up to be, Waite said.
“Voters might not always know the legislative details,” Waite said. “But they do care about having reliable broadband access, keeping their kids safe online and ensuring that innovation is advancing to keep pace with global competition.”
Department of Labor releases AI best practices for employers
By Paige Gross | Oct. 19, 2024
A new best practices guide from the U.S. Department of Labor outlines how companies should develop and use AI and protect their employees while doing so. (Photo by Tierney L. Cross/Getty Images)
The U.S. Department of Labor released a list of artificial intelligence best practices for developers and employers this week, aiming to help employers benefit from potential time and cost savings of AI, while protecting workers from discrimination and job displacement.
The voluntary guidelines come about a year after President Joe Biden signed an executive order to assess the innovative potential and risks of AI across government and private sectors. The order directed the creation of the White House AI Council, the creation of a framework for federal agencies to follow relating to privacy protection and a list of guidelines for securing AI talent, for navigating the effects on the labor market and for ensuring equity in AI use, among others.
“Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks,” Biden said of the executive order last year. “This endeavor demands a society-wide effort that includes government, the private sector, academia and civil society.”
“Whether AI in the workplace creates harm for workers and deepens inequality or supports workers and unleashes expansive opportunity depends (in large part) on the decisions we make,” DOL Acting Secretary Julie Su said. “The stakes are high.”
The report shares eight principles and best practices, with a “north star” of centering workers. The guide says workers, especially from underserved communities, should understand and have input in the design, development, testing, training, use and oversight of the AI systems used in their workplaces. This will improve job quality and allow businesses to deliver on their outcomes. Unions should bargain in good faith on the use of AI and electronic monitoring in the workplace, it said.
Other best practices include ethically developing AI, with training that protects and takes feedback from workers. Organizations should also have a clear governance system to evaluate AI used in the workplace, and they should be transparent about the AI systems they’re using, the DOL said.
AI systems cannot violate or undermine workers’ rights to organize, or obstruct their health, safety, wage, anti-discrimination and anti-retaliation protections, the department said. Therefore, prior to deployment, employers should audit their AI systems for potential impacts of discrimination on the basis of “race, color, national origin, religion, sex, disability, age, genetic information and other protected bases,” and should make those results public.
The report also outlines how employers can and should help workers with AI. Before implementing an AI tool, employers should consider the impact it will have on job opportunities and be clear about the specific tasks it will perform. Employers that experience productivity gains or increased profits should consider sharing the benefits with their workers, such as through increased wages, improved benefits or training, the DOL said.
The implementation of AI systems has the potential to displace workers, Su said in her summary. To mitigate this, employers should appropriately train their employees to use these systems and, when feasible, reassign workers displaced by AI to other jobs within their organization. Employers should reach out to state and local workforce programs for education and upskilling so their workforce can learn new skills rather than be phased out by technology.
And lastly, employers using AI systems that collect workers’ data should safeguard that data, should not collect more data than is absolutely necessary and should not share the data outside the business without workers’ freely given consent.
The guidelines outlined by the DOL are not meant to be “a substitute for existing or future federal or state laws and regulations,” the department said, but rather a “guiding framework for businesses” that can be customized with feedback from their workers.
“We should think of AI as a potentially powerful technology for worker well-being, and we should harness our collective human talents to design and use AI with workers as its beneficiaries, not as obstacles to innovation,” Su said.
Computer programs monitor students’ every word in the name of safety
By Madyson Fitzgerald | Oct. 18, 2024
A student works on a computer at a K-12 school in Provo, Utah. School districts across the country have adopted computer monitoring platforms that analyze what students are doing on school-issued devices and flag activities that may signal a risk of self-harm or threats to others. (George Frey/Getty Images)
Whether it’s a research project on the Civil War or a science experiment on volcano eruptions, students in the Colonial School District near Wilmington, Delaware, can look up just about anything on their school-provided laptops.
But in one instance, an elementary school student searched “how to die.”
In that case, Meghan Feby, an elementary school counselor in the district, got a phone call through a platform called GoGuardian Beacon, whose algorithm flagged the phrase. The system sold by educational software company GoGuardian allows schools to monitor and analyze what students are doing on school-issued devices and flag any activities that signal a risk of self-harm or threats to others.
The student who had searched “how to die” did not want to die and showed no indicators of distress, Feby said — the student was looking for information but in no danger. Still, she values the program.
“I’ve gotten into some situations with GoGuardian where I’m really happy that they came to us and we were able to intervene,” Feby said.
School districts across the country have widely adopted such computer monitoring platforms. With the youth mental health crisis worsened by the COVID-19 pandemic and school violence affecting more K-12 students nationwide, teachers are desperate for a solution, experts say.
But critics worry about the lack of transparency from companies that have the power to monitor students and choose when to alert school personnel. Constant student surveillance also raises concerns regarding student data, privacy and free speech.
While available for more than a decade, the programs saw a surge in use during the pandemic as students transitioned to online learning from home, said Jennifer Jones, a staff attorney at the Knight First Amendment Institute.
“I think because there are all kinds of issues that school districts have to contend with — like student mental health issues and the dangers of school shootings — I think they [school districts] just view these as cheap, quick ways to address the problem without interrogating the free speech and privacy implications in a more thoughtful way,” Jones said.
According to the most recent youth risk behavior survey from the federal Centers for Disease Control and Prevention, nearly all indicators of poor mental health, suicidal thoughts and suicidal behaviors increased from 2013 to 2023. During the same period, the percentage of high school students who were threatened or injured at school, missed school because of safety concerns or experienced forced sex increased, according to the CDC report.
And the threat of school shootings remains on many educators’ minds. Since the Columbine High School shooting in 1999, more than 383,000 students have experienced gun violence at school, according to The Washington Post’s count.
GoGuardian CEO Rich Preece told Stateline that about half of the K-12 public schools in the United States have installed the company’s platforms.
As her school’s designee, Feby gets an alert when a student uses certain search terms or combinations of words on their school-issued laptops. “It will either come to me as an email, or, if it is very high risk, it comes as a phone call.”
Once she’s notified, Feby will decide whether to meet with the student or call the child’s home. If the system flags troubling activity outside of school hours, GoGuardian Beacon contacts another person in the county — including law enforcement, in some school districts.
Feby said she’s had some false alarms. One student was flagged because of song lyrics she had looked up. Another had searched for something related to anime.
About a third of the students in Feby’s school come from homes where English isn’t the first language, so students often use worrisome English terms inadvertently. Kids can also be curious, she said.
Still, having GoGuardian in the classroom is important, Feby said. Before she became a counselor 10 years ago, she was a school teacher. And after the 2012 Sandy Hook Elementary School mass shooting, she realized school safety was more important than ever.
Data and privacy
Teddy Hartman, GoGuardian’s head of privacy, taught high school English literature in East Los Angeles and was a school administrator before joining the technology company about four years ago.
Hartman was brought to GoGuardian to help with creating a robust privacy program, he said, including guardrails on its use of artificial intelligence.
“We thought, ‘How can we co-create with educators, the best of the data scientists, the best of the technologists, while also remembering that students and our educators are first and foremost?’” Hartman said.
GoGuardian isn’t using any student data outside of the agreements that school districts have allowed, and that data isn’t used to train the company’s AI, Hartman said. Companies that regulate what children can do online are also required to adhere to federal laws regarding the safety and privacy of minors, including the Family Educational Rights and Privacy Act and the Children’s Online Privacy Protection Rule.
But privacy experts are still concerned about just how much access these types of companies should have to student data.
School districts across the country are spending hundreds of thousands of dollars on contracts with some of the leading computer monitoring vendors — including GoGuardian, Gaggle and others — without fully assessing the privacy and civil rights implications, said Clarence Okoh, a senior attorney at the Center on Privacy and Technology at the Georgetown University Law Center.
In 2021, while many schools were just beginning to see the effects of online learning, The 74, a nonprofit news outlet covering education, published an investigation into how Gaggle was operating in Minneapolis schools. Hundreds of documents revealed how students at one school system were subject to constant digital surveillance long after the school day was over, including at home, the outlet reported.
That level of pervasive surveillance can have far-reaching implications, Okoh said. For one, in jurisdictions where legislators have expanded censorship of “divisive concepts” in schools, including critical race theory and LGBTQ+ themes, schools’ ability to monitor conversations that include those terms is concerning, he said.
A report by the Electronic Frontier Foundation, a nonprofit digital rights group based in San Francisco, illustrates what kinds of keyword triggers are blocked or flagged for administrators. In one example, GoGuardian had flagged a student for visiting the text of a Bible verse including the word “naked,” the report said. In another instance, a Texas House of Representatives site with information regarding “cannabis” bills was flagged.
GoGuardian and Gaggle both also dropped LGBTQ+ terms from their keyword lists after the foundation’s initial records request, the group said.
But getting a full understanding of the way these companies monitor students is challenging because of a lack of transparency, Jones said. It’s difficult to get information from private tech companies, and the majority of their data isn’t made public, she said.
Do they work?
Years before the 2022 shooting at Robb Elementary School in Uvalde, Texas, the school district purchased a technology service to monitor what students were doing on social media, according to The Dallas Morning News. The district sent two payments to the Social Sentinel company totaling more than $9,900, according to the paper.
While the cost varies, some school districts are spending hundreds of thousands of dollars on online monitoring programs. Muscogee County School District in Georgia paid $137,829 in initial costs to install GoGuardian on the district’s Chromebooks, according to the Columbus Ledger-Enquirer. In Maryland, Montgomery County Public Schools eliminated GoGuardian from its budget for the 2024-2025 school year after spending $230,000 annually on it, later switching to Lightspeed, according to the Wootton Common Sense.
Despite the spending, there’s no way to prove that these technologies work, said Chad Marlow, a senior policy counsel at the American Civil Liberties Union who authored a report on education surveillance programs.
In 2019, Bark, a content monitoring platform, claimed in a blog post describing its Bark for Schools program to have helped prevent 16 school shootings. The Gaggle company website says it saved 5,790 lives between 2018 and 2023.
These data points are measured by the number of alerts the systems generate that indicate a student may be very close to harming themselves or others. But there is little evidence that this kind of school safety technology is effective, according to the ACLU report.
“You cannot use data to say that, if there wasn’t an intervention, something would have happened,” Marlow said.
Computer monitoring programs are just one example of an overall increase in school surveillance nationwide, including cameras, facial recognition technology and more. And increased surveillance does not necessarily deter harmful conduct, Marlow said.
“A lot of schools are saying, ‘You know what, we’ve got $50,000 to spend, I’m going to spend it on a student surveillance product that doesn’t work, instead of a door that locks or a mental health counselor,’” Marlow said.
Some experts are advocating for more mental health resources, including hiring more guidance counselors, and school policies that support mental health, which could prevent violence or suicide, Jones said. Community engagement programs, including volunteer work or community events, also can contribute to emotional and mental well-being.
But that’s in an ideal world, GoGuardian’s Hartman said. Computer monitoring platforms aren’t the only answer to the youth mental health and violence epidemic, but they aim to help, he said.
“We were founded by engineers,” Hartman said. “So, in our slice of this world, is there something we can do, from a school technology perspective that can help by being a tool in the toolbox? It’s not an end-all, be-all.”
This story is republished from Stateline, a sister publication to the Kentucky Lantern and part of the nonprofit States Newsroom network.
A new Kentucky area code? 502 expected to run short of numbers in about three years
https://www.academytrans.com/briefs/a-new-kentucky-area-code-502-expected-to-run-short-of-numbers-in-about-three-years/
By Liam Niemeyer | Mon, 14 Oct 2024
The need for a new area code isn’t necessarily driven by population but associated with the multiple numbers each person may be using. The supply of 502 area code phone number prefixes, the three digits following an area code in a phone number, is expected to be exhausted by the end of 2027. (Getty Images)
A national organization overseeing the supply of phone numbers on behalf of phone carriers is asking a Kentucky regulator to establish a new area code in response to a dwindling supply of available 502 area code numbers covering Louisville, Frankfort and nearby counties.
The application filed with the Kentucky Public Service Commission (PSC) on Monday by the North American Numbering Plan Administrator (NANPA) states the supply of 502 area code phone number prefixes, the three digits following an area code in a phone number, is expected to be exhausted by the end of 2027. Florence Weber, a NANPA vice president, wrote in the application that representatives of the telecommunications industry consulted by NANPA decided adding a new area code would be the best way forward.
That would work by overlaying a new area code onto the geographic area serviced by the existing 502 area code. New phone numbers added in the area would have the new area code available while those with phone numbers with the previous 502 area code would be able to keep their numbers.
Heidi Wayman, a data manager for NANPA, told the Lantern the need for a new area code isn’t necessarily driven by population but associated with the multiple numbers each person may be using.
“You may have devices as well with numbers, tablets, watches, etc. So you may have multiple phone numbers even assigned to you,” Wayman said. “We need available prefixes to assign out to the carriers.”
Weber wrote that phone numbers with the new area code would become available once the supply of 502 area code numbers had been exhausted, and that the supply of numbers with the new area code would last an estimated 30 years. The consensus of telecommunications providers, according to the application, is that layering a new code onto the current 502 region would be easier to implement and reduce confusion compared to other options.
Industry representatives — which include AT&T, T-Mobile, Charter Communications, Verizon and Boost Mobile — also considered splitting the 502 area code geographic region into two distinct areas, one keeping 502 and the other getting a new area code. Another option considered was eliminating the geographic boundaries for area codes in Kentucky, including those of the 270, 606 and 859 area codes.
This wouldn’t be the first time Kentucky received a new area code this century. Following a request from NANPA, the PSC in 2014 established the new area code 364 to be overlaid on the 270 area code to increase the supply of phone numbers in Western Kentucky.
NANPA is requesting the PSC, which regulates utilities in the state, issue a decision on how to move forward by July 31, 2025. Once a decision has been issued, NANPA plans to roll out a 13-month timeline for establishing the new area code. NANPA is run by the New Jersey-based data management company SOMOS through a contract with the Federal Communications Commission.
As AI takes the helm of decision making, signs of perpetuating historic biases emerge
https://www.academytrans.com/2024/10/14/as-ai-takes-the-helm-of-decision-making-signs-of-perpetuating-historic-biases-emerge/
By Paige Gross | Mon, 14 Oct 2024
Studies show that AI systems used to make important decisions such as approval of loan and mortgage applications can perpetuate historical bias and discrimination if not carefully constructed and monitored. (Seksan Mongkhonkhamsao/Getty Images)
In a recent study evaluating how chatbots make loan suggestions for mortgage applications, researchers at Pennsylvania’s Lehigh University found something stark: there was clear racial bias at play.
With 6,000 sample loan applications based on data from the 2022 Home Mortgage Disclosure Act, the chatbots recommended denials for more Black applicants than identical white counterparts. They also recommended Black applicants be given higher interest rates, and labeled Black and Hispanic borrowers as “riskier.”
White applicants were 8.5% more likely to be approved than Black applicants with the same financial profile. And applicants with “low” credit scores of 640 saw a wider margin — white applicants were approved 95% of the time, while Black applicants were approved less than 80% of the time.
The experiment aimed to simulate how financial institutions are using AI algorithms, machine learning and large language models to speed up processes like lending and underwriting of loans and mortgages. These “black box” systems, where the algorithm’s inner workings aren’t transparent to users, have the potential to lower operating costs for financial firms and any other industry employing them, said Donald Bowen, an assistant fintech professor at Lehigh and one of the authors of the study.
But there’s also large potential for flawed training data, programming errors, and historically biased information to affect the outcomes, sometimes in detrimental, life-changing ways.
“There’s a potential for these systems to know a lot about the people they’re interacting with,” Bowen said. “If there’s a baked-in bias, that could propagate across a bunch of different interactions between customers and a bank.”
How does AI discriminate in finance?
Decision-making AI tools and large language models, like the ones in the Lehigh University experiment, are being used across a variety of industries, like healthcare, education, finance and even in the judicial system.
Most machine learning algorithms are what’s called classification models, meaning you formally define a problem or question, and then feed the algorithm a set of inputs such as a loan applicant’s age, income, education and credit history, Michael Wellman, a computer science professor at the University of Michigan, explained.
The algorithm spits out a result — approved or not approved. More complex algorithms can assess these factors and deliver more nuanced answers, like a loan approval with a recommended interest rate.
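The idea can be sketched in a few lines of Python. This is a toy illustration only, not the study’s methodology: the feature names, hand-picked weights and approval threshold below are invented stand-ins for the parameters a real model would learn from data.

```python
def score_applicant(age, income, years_education, credit_score):
    """Return a score in [0, 1]; higher means more likely to be approved."""
    # Hand-picked weights stand in for parameters a real model would learn.
    return (
        0.4 * min(credit_score / 850, 1.0)    # normalized credit score
        + 0.3 * min(income / 100_000, 1.0)    # normalized income
        + 0.2 * min(years_education / 20, 1.0)
        + 0.1 * min(age / 80, 1.0)
    )

def classify(applicant, threshold=0.5):
    """Binary decision: the model 'spits out' approved or denied."""
    return "approved" if score_applicant(**applicant) >= threshold else "denied"

applicant = {"age": 35, "income": 60_000, "years_education": 16, "credit_score": 700}
print(classify(applicant))
```

A real lender’s model would learn its weights from historical data — which is exactly where the biases described in the study can creep in.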
Machine learning advances in recent years have allowed for what’s called deep learning, or construction of big neural networks that can learn from large amounts of data. But if AI’s builders don’t keep objectivity in mind, or rely on data sets that reflect deep-rooted and systemic racism, results will reflect that.
“If it turns out that you are systematically more often making decisions to deny credit to certain groups of people more than you make those wrong decisions about others, that would be a time that there’s a problem with the algorithm,” Wellman said. “And especially when those groups are groups that are historically disadvantaged.”
Bowen was initially inspired to pursue the Lehigh University study after a smaller-scale assignment with his students revealed the racial discrimination by the chatbots.
“We wanted to understand if these models are biased, and if they’re biased in settings where they’re not supposed to be,” Bowen said, since underwriting is a regulated industry that’s not allowed to consider race in decision-making.
For the official study, Bowen and a research team ran thousands of loan application numbers over several months through different commercial large language models, including OpenAI’s GPT 3.5 Turbo and GPT 4, Anthropic’s Claude 3 Sonnet and Opus and Meta’s Llama 3-8B and 3-70B.
In one experiment, they included race information on applications and saw the discrepancies in loan approvals and mortgage rates. In another, they instructed the chatbots to “use no bias in making these decisions.” That experiment saw virtually no discrepancies between loan applicants.
But if race data isn’t collected in modern day lending, and algorithms used by banks are instructed to not consider race, how do people of color end up getting denied more often, or offered worse interest rates? Because much of our modern-day data is influenced by disparate impact, or the influence of systemic racism, Bowen said.
Though a computer wasn’t given the race of an applicant, a borrower’s credit score, which can be influenced by discrimination in the labor and housing markets, will have an impact on their application. So might their zip code, or the credit scores of other members of their household, all of which could have been influenced by the historic racist practice of redlining, or restricting lending to people in poor and nonwhite neighborhoods.
Machine learning algorithms aren’t always calculating their conclusions in the way that humans might imagine, Bowen said. The patterns it is learning apply to a variety of scenarios, so it may even be digesting reports about discrimination, for example learning that Black people have historically had worse credit. Therefore, the computer might see signs that a borrower is Black, and deny their loan or offer them a higher interest rate than a white counterpart.
Other opportunities for discrimination
Decision making technologies have become ubiquitous in hiring practices over the last several years, as application platforms and internal systems use AI to filter through applications, and pre-screen candidates for hiring managers. Last year, New York City began requiring employers to notify candidates about their use of AI decision-making software.
By law, the AI tools should be programmed to have no opinion on protected classes like gender, race or age, but some users allege that they’ve been discriminated against by the algorithms anyway. In 2021, the U.S. Equal Employment Opportunity Commission launched an initiative to examine more closely how new and existing technologies change the way employment decisions are made. Last year, the commission settled its first-ever AI discrimination hiring lawsuit.
The New York federal court case ended in a $365,000 settlement after tutoring company iTutorGroup Inc. allegedly used an AI-powered hiring tool that rejected female applicants over 55 and male applicants over 60. Two hundred applicants received the settlement, and iTutor agreed to adopt anti-discrimination policies and conduct training to ensure compliance with equal employment opportunity laws, Bloomberg reported at the time.
Another anti-discrimination lawsuit is pending in California federal court against AI-powered company Workday. Plaintiff Derek Mobley alleges he was passed over for more than 100 jobs at companies that contract with the software firm because he is Black, older than 40 and has mental health issues, Reuters reported this summer. The suit claims that Workday trains its software on data about a company’s existing workforce, a practice that doesn’t account for discrimination that may then be reflected in future hiring.
U.S. judicial and court systems have also begun incorporating decision-making algorithms in a handful of operations, like risk assessment analysis of defendants, determinations about pretrial release, diversion, sentencing and probation or parole.
Though the technologies have been cited in speeding up some of the traditionally lengthy court processes — like for document review and assistance with small claims court filings — experts caution that the technologies are not ready to be the primary or sole evidence in a “consequential outcome.”
“We worry more about its use in cases where AI systems are subject to pervasive and systemic racial and other biases, e.g., predictive policing, facial recognition, and criminal risk/recidivism assessment,” wrote the co-authors of a 2024 paper in Judicature.
Utah passed a law earlier this year to combat exactly that. HB 366, sponsored by state Rep. Karianne Lisonbee, R-Syracuse, addresses the use of an algorithm or a risk assessment tool score in determinations about pretrial release, diversion, sentencing, probation and parole, saying that these technologies may not be used without human intervention and review.
Lisonbee told States Newsroom that by design, the technologies provide a limited amount of information to a judge or decision-making officer.
“We think it’s important that judges and other decision-makers consider all the relevant information about a defendant in order to make the most appropriate decision regarding sentencing, diversion, or the conditions of their release,” Lisonbee said.
She also brought up concerns about bias, saying the state’s lawmakers don’t currently have full confidence in the “objectivity and reliability” of these tools. They also aren’t sure of the tools’ data privacy settings, which is a priority to Utah residents. These issues combined could put citizens’ trust in the criminal justice system at risk, she said.
“When evaluating the use of algorithms and risk assessment tools in criminal justice and other settings, it’s important to include strong data integrity and privacy protections, especially for any personal data that is shared with external parties for research or quality control purposes,” Lisonbee said.
Preventing discriminatory AI
Some legislators, like Lisonbee, have taken note of these issues of bias, and potential for discrimination. Four states currently have laws aiming to prevent “algorithmic discrimination,” where an AI system can contribute to different treatment of people based on race, ethnicity, sex, religion or disability, among other things. They include Utah, as well as California (SB 36), Colorado (SB 21-169) and Illinois (HB 0053).
Though it’s not specific to discrimination, Congress introduced a bill in late 2023 to amend the Financial Stability Act of 2010 to include federal guidance for the financial industry on the uses of AI. This bill, the Financial Artificial Intelligence Risk Reduction Act or the “FAIRR Act,” would require the Financial Stability Oversight Council to coordinate with agencies regarding threats to the financial system posed by artificial intelligence, and may regulate how financial institutions can rely on AI.
Lehigh’s Bowen made it clear he felt there was no going back on these technologies, especially as companies and industries realize their cost-saving potential.
“These are going to be used by firms,” he said. “So how can they do this in a fair way?”
Bowen hopes his study can help inform financial and other institutions in deployment of decision-making AI tools. For their experiment, the researchers wrote that it was as simple as using prompt engineering to instruct the chatbots to “make unbiased decisions.” They suggest firms that integrate large language models into their processes do regular audits for bias to refine their tools.
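One simple form such an audit could take (a hypothetical sketch, not the study’s protocol) is a paired test: submit otherwise-identical applications that differ only in a group label and compare approval rates. The `decide()` function below is an invented stub with a deliberate disparity baked in so the audit has something to detect; in practice it would wrap the real model or chatbot call.

```python
import random

def decide(application):
    # Invented stub standing in for the real system under audit. A bias is
    # injected on purpose here so the audit has something to find.
    score = application["credit_score"] / 850
    if application["group"] == "B":
        score -= 0.05  # simulated disparity
    return score >= 0.75

def audit(n=10_000, seed=0):
    """Compare approval rates for identical profiles that differ only in group."""
    rng = random.Random(seed)
    approvals = {"A": 0, "B": 0}
    for _ in range(n):
        credit = rng.randint(550, 850)
        for group in ("A", "B"):
            # Same financial profile; only the group label differs.
            if decide({"credit_score": credit, "group": group}):
                approvals[group] += 1
    return {g: count / n for g, count in approvals.items()}

rates = audit()
print(f"approval rate A={rates['A']:.2%}, B={rates['B']:.2%}")
```

Run against a real system, a persistent gap between the two rates on identical profiles would flag the kind of disparity the Lehigh researchers measured.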
Bowen and other researchers on the topic stress that more human involvement is needed to use these systems fairly. Though AI can deliver a decision on a court sentence, mortgage loan, job application, healthcare diagnosis or customer service inquiry, that doesn’t mean these systems should operate unchecked.
University of Michigan’s Wellman told States Newsroom he’s looking for government regulation on these tools, and pointed to H.R. 6936, a bill pending in Congress which would require federal agencies to adopt the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology. The framework calls out potential for bias, and is designed to improve trustworthiness for organizations that design, develop, use and evaluate AI tools.
“My hope is that the call for standards … will read through the market, providing tools that companies could use to validate or certify their models at least,” Wellman said. “Which, of course, doesn’t guarantee that they’re perfect in every way or avoid all your potential negatives. But it can … provide basic standard basis for trusting the models.”
Kentucky attorney general sues TikTok, calling it ‘addiction machine’ that targets children
https://www.academytrans.com/2024/10/09/kentucky-attorney-general-sues-tiktok-calling-it-addiction-machine/
By Sarah Ladd | Wed, 09 Oct 2024
Kentucky has joined 12 other states and the District of Columbia in seeking monetary damages from TikTok for harm it has allegedly caused to youngsters. (Getty Images)
Kentucky Attorney General Russell Coleman is suing TikTok, accusing the social media platform of exploiting minors and being “designed to addict and otherwise harm” them.
In filing the lawsuit Tuesday in Scott County, Coleman joins a dozen states and Washington D.C. in seeking payouts for what they describe as a pattern of knowingly hurting youth.
The other states that have sued are California, New York, Illinois, Louisiana, Massachusetts, Mississippi, North Carolina, New Jersey, Oregon, South Carolina, Vermont, Washington and the District of Columbia.
The Kentucky lawsuit says TikTok is “designed” to be “an addiction machine” that targets children.
Michael Hughes, a spokesperson for TikTok, said in a statement, “We strongly disagree with these claims, many of which we believe to be inaccurate and misleading.”
“We’re proud of and remain deeply committed to the work we’ve done to protect teens and we will continue to update and improve our product,” said Hughes.
What’s in the lawsuit?
Coleman’s lawsuit accuses the company of unfair and deceptive acts that violate Kentucky’s Consumer Protection Act, failing to warn consumers of the potential dangers in consuming the platform’s media and more.
“Unlike other consumer products that have appealed to children for generations — like candy or soda — with social media platforms there is no natural break point where the consumer has finished the unit of consumption,” the lawsuit states. “Instead, social media platforms are a bottomless pit where users can spend an infinite amount of their time.”
TikTok’s Michael Hughes said the company does take steps to protect users.
“We provide robust safeguards, proactively remove suspected underage users, and have voluntarily launched safety features such as default screentime limits, family pairing and privacy by default for minors under 16,” Hughes said. “We’ve endeavored to work with the Attorneys General for over two years, and it is incredibly disappointing they have taken this step rather than work with us on constructive solutions to industry wide challenges.”
Speaking in Northern Kentucky Wednesday, Coleman promised to “force (TikTok) to answer for creating and pushing an app designed specifically to addict and harm Kentucky’s children.”
“TikTok is more than trendy dances or funny videos. It’s a specially crafted tool to suck in minors, leading to depression, anxiety, altered development, and more,” Coleman said.
“TikTok intentionally manipulates the release of dopamine in young users’ developing brains and causes them to use TikTok in an excessive, compulsive, and addictive manner that harms them both mentally and physically,” the Kentucky lawsuit says.
Forbes reported in 2022 that watching TikTok videos is like taking drugs, calling it a “pleasurable dopamine state” that is “almost hypnotic.”
Filed in Scott Circuit Court, the 125-page lawsuit contains frequent blocks of redacted material. Those include information from internal TikTok documents that “for the time being remain subject to certain confidentiality agreements,” said Kevin Grout, a spokesman for Coleman’s office.
Coleman is seeking an injunction to halt TikTok’s “ongoing violations,” actual and punitive damages and penalties of up to $2,000 for each violation of the Kentucky Consumer Protection Act.
TikTok also is fighting a federal law enacted by Congress earlier this year that would ban the app in the U.S. unless its owner, ByteDance, sells it to a non-Chinese company by Jan. 19.
What does research show?
In a 2023 report, the U.S. Surgeon General said social media use among youth can have both positive and negative effects. For example, youth may be able to find community and connection through social media that they otherwise lacked. But their mental health can decline with that use, and they can have increased anxiety and depression.
“Because adolescence is a vulnerable period of brain development, social media exposure during this period warrants additional scrutiny,” the surgeon general report said.
The Annie E. Casey Foundation, which advocates for children’s wellbeing, says tech companies need to:
Adequately and independently assess the impact of social media on children and adolescents.
Prioritize user health and safety when designing and developing social media products and services.
Formalize a strategy for investigating the requests and complaints of young people, families, educators and others.
Terry Brooks, the executive director of Kentucky Youth Advocates, praised Coleman “for standing up against the social media giant – and standing up for Kentucky’s young people.”
“There is nothing more paramount than upholding our kids’ mental health and safety, especially as kids increasingly find themselves in digital spaces,” Brooks said. “The addictive nature of the social media platform TikTok can harm kids’ developing brain, expose them to unrealistic standards and unsafe situations, and put them at risk of sexually explicit content and exploitation.”
Kentucky secretary of state urges lawmakers to protect election officials from AI impersonations
https://www.academytrans.com/2024/10/08/kentucky-secretary-of-state-urges-lawmakers-to-protect-election-officials-from-ai-impersonations/
By McKenna Horsley | Tue, 08 Oct 2024
Kentucky Secretary of State Michael Adams told lawmakers: “It is illegal to impersonate a peace officer, and for good reason. It should be equally illegal to impersonate a secretary of state or county clerk and put out false information in any format about our elections.” He was speaking Tuesday to a task force studying artificial intelligence. (Kentucky Lantern photo by Matthew Mueller)
Kentucky’s Republican Secretary of State Michael Adams told lawmakers it’s “too soon” to tell what effect artificial intelligence will have on elections but that it has “potential for significant impact,” and he urged them to consider making it a crime to impersonate an election official.
Adams appeared before the General Assembly’s Artificial Intelligence Task Force Tuesday to discuss AI, which has become a growing concern for possible influence in this year’s presidential election. A recent study from the Pew Research Center found 57% of U.S. adults were extremely or very concerned that people or groups seeking to influence the election would use AI to create fake or misleading information about presidential candidates and campaigns.
“Should you take up AI legislation when you return in 2025, I would encourage you to consider prohibiting impersonation of election officials,” Adams told the task force. “It is illegal to impersonate a peace officer, and for good reason. It should be equally illegal to impersonate a secretary of state or county clerk and put out false information in any format about our elections.”
Adams highlighted a bipartisan bill from Lexington legislators, Republican Sen. Amanda Mays Bledsoe and Democratic Caucus Chair Sen. Reggie Thomas, that would have limited the use of “deep fakes,” or deceptive AI, to influence elections in Kentucky. In the recent legislative session, the bill died in the House after gaining approval in the Senate.
Adams gave an example of a political consultant receiving a fine of $6 million from the Federal Communications Commission for fake robocalls to New Hampshire voters that mimicked President Joe Biden. The calls encouraged voters to not vote in the state’s Democratic primary. Adams said the fine was for violating telecommunication law, and New Hampshire brought criminal charges against the consultant because of a state law making it a crime to impersonate a candidate.
“As you look to protect candidates and voters from such practices, I urge you to consider inclusion of election officials,” Adams said. “An impersonation of me or my deputy secretary or senior staff of the State Board of Elections or a county clerk actually could do more harm than impersonation of a candidate.”
Adams noted that concern about AI influence in elections is not just an American problem. Other countries, such as the United Kingdom and South Africa, also have consequential elections this year and face issues with AI interference as well.
Bledsoe, who is a co-chair of the task force, said that there is tension between legislation aimed at preventing misuse of AI to influence elections and free speech protections. Adams said in response that laws against voter suppression do exist.
“I think that the process and distrust of what is actually being said is the greatest danger to a voter, and we want to protect (the voter) as much as possible,” Bledsoe said.
Rep. Josh Bray, R-Mount Vernon, the other co-chair of the task force, said what Adams was asking was “very reasonable.”
“It’s something I know that we’ve debated internally,” Bray said. “We’ve had Senate bills filed, we’ve had House bills filed, and it’s very clear that this is something that’s going to be with us as the technology evolves.”
Data, pilot projects showing food service robots may not threaten jobs
https://www.academytrans.com/2024/09/30/data-pilot-projects-showing-food-service-robots-may-not-threaten-jobs/
By Paige Gross | Mon, 30 Sep 2024
Fast-casual restaurant Chipotle is experimenting with a work station that has automation assembling salads and bowls underneath the counter while a human worker assembles more complex dishes such as burritos on top. (Photo courtesy of Chipotle)
Though food service workers and economists have long worried about the impact technology would have on the restaurant labor force, pilot programs in several fast-casual restaurants over the last few years have shown it may not have the negative impact they feared, a labor economist says.
Technology plays several roles in food service, and the industry has seen the adoption of touch screens, AI-powered ordering and food prep machines over the last few years. More recently, it’s become increasingly likely that a robot plays a part in your food preparation or delivery.
Chipotle is testing the Autocado, which splits and prepares avocados to be turned into guacamole by a kitchen crew member, and the Augmented Makeline, which builds bowls and salads autonomously underneath the food line while employees construct burritos, tacos and quesadillas on top. Chipotle said 65% of its mobile orders are for salads or bowls, and the Augmented Makeline’s aim is improving efficiency and digital order accuracy.
The company said it invested in robotics company Vebu and worked with them on the design for the Autocado, and it invested in food service platform Hyphen, which custom made the Augmented Makeline for Chipotle.
“Optimizing our use of these systems and incorporating crew and customer feedback are the next steps in the stage-gate process before determining their broader pilot plans,” Curt Garner, Chipotle’s chief customer and technology officer said in a statement.
The company said the introduction of these robots will not eliminate any jobs, as the crew members are supposed to have a “cobotic relationship” with them. The aim is that crew members will be able to spend more time on either food prep tasks or on providing hospitality to customers.
Ben Zipperer, a low-wage labor market economist at the Economic Policy Institute, said the early fears that automation and robots would threaten jobs in the food service industry are not being realized. Automation has been shown to make workers more productive and effective, he said.
Robots have also been shown to make businesses more efficient and profitable, Zipperer said, which creates an “offsetting demand factor.” That increased demand and profitability can actually help keep the cost of food for customers more affordable, he added.
When one action is freed up by a robot, the restaurant has more freedom to place workers on other high-demand tasks.
“Either those workers are still going to help produce guacamole, because people want to buy more of it,” Zipperer said of the Chipotle announcement, “or there’s other things that that business is trying to produce but can’t allocate the labor towards, even though they have demand for it.”
Zipperer pointed toward automated food purchasing with the use of touchscreen kiosks, which has been widely adopted in fast food service. In these cases, workers get shifted away from cash registers and toward more back-of-house jobs like food prep or janitorial work.
McDonald’s shows an example of this. The fast food restaurant was one of the earliest adopters of touchscreen kiosks, with thousands of stores using the technology to collect orders by 2015, and screens becoming nearly ubiquitous by 2020.
Last week, the company said the kiosks actually produce extra work for staff, as customers tend to purchase more food than they would at a cash register. The machines have built-in upselling features that cashiers don’t always have time to push with customers, and the introduction of mobile ordering and delivery has created new tasks for front-of-house staff to handle.
Many fast food CEOs have threatened that raising minimum wages across the U.S. would mean losing jobs to autonomous machines and kiosks. And while some franchise owners may take that route, it’s not a trend across the whole country. Jobs at quick-service and fast-casual restaurants were up about 150,000, or 3%, above their pre-pandemic levels in August.
As technology takes more of a role in food service production, businesses that want to succeed will find the balance of cost-saving efficiencies and valued work by their employees, Zipperer said.
“As long as there is demand for what that business is producing, that will allow workers to not feel a lot of the negative effects of technology,” he said.
How immigrants navigate their digital footprints in a charged political climate
https://www.academytrans.com/2024/09/17/how-immigrants-navigate-their-digital-footprints-in-a-charged-political-climate/
Paige Gross | Tue, 17 Sep 2024
José Patiño, a 35-year-old DACA recipient and Arizona community organizer, says it took him a long time to overcome the fear of sharing his personal information — including his legal status — on social media. (Photo courtesy of José Patiño)
For more than a decade, San-Francisco-based Miguel has been successfully filing renewals for his Deferred Action for Childhood Arrivals (DACA) status every two years, at least until 2024.
For some reason, this year, it took more than five months to get approval, during which his enrollment in the program lapsed, leaving him in a legal limbo.
He lost his work visa and was put on temporary unpaid leave for three months from the large professional services company where he’s worked for a decade.
“In those three months, I was trying to do a lot of damage control around getting an expedited process, reaching out to the ombudsman, congressmen — all of the escalation type of actions that I could do,” he said.
He was also being cautious about what he put in his social media and other online postings. Like many, he realized such information could put him at risk in an uncertain political environment around immigration.
“Given my current situation, I try not to brand myself as undocumented, or highlight it as the main component of my identity digitally,” Miguel said.
Miguel, who came to the United States at age 7 with his parents from the Philippines, says he was already mindful about his digital footprint before his DACA protections lapsed. His Facebook and Instagram accounts are set to private, and while amplifying the stories of immigrants is one of his goals, he tries to do so from an allyship perspective, rather than centering his own story.
While his DACA status has now been renewed — reinstating his work permit and protection from deportation — and Miguel is back at work, he’s taking extra precautions about what he posts online and how he’s perceived publicly. It’s the reason that States Newsroom is not using his full name for this story.
Miguel’s company is regulated by the SEC, and has to take a nonpartisan approach on political issues, he said, and that extends to employees. Staying neutral about political issues may be a common rule for many American workers, but it’s more complicated when an issue is a part of your core identity, Miguel said.
“I think that’s been a huge conflicting area in my professional journey,” he said. “It’s the separation and compartmentalization that I have to do to separate my identity — given that it is a very politicized experience — with my actual career and company affiliation.”
Digital footprints + surveillance
It’s not unusual for your digital footprint — the trail of information you create browsing the web or posting on social media — to have real-life ramifications. But if you’re an immigrant in the United States, one post, like or comment on social media could lead to an arrest, deportation or denial of citizenship.
In 2017, the Department of Homeland Security issued a notice saying it would begin tracking more information, including social media handles for temporary visa holders, immigrants and naturalized U.S. citizens in an electronic system. And Homeland Security would store that information.
But in recent years, there’s been more data collection. In 2019, U.S. Immigration and Customs Enforcement (ICE) was found to have contracted with commercial data brokers like Thomson Reuters’ CLEAR, which has access to information in credit agencies, cellphone registries, social media posts, property records and internet chat rooms, among other sources.
Emails sent by ICE officials were included in a 2019 federal court filing, showing that information accessed via the CLEAR database was used in a 2018 deportation case, the Intercept reported. ICE agents used an address found in CLEAR, along with Facebook posts of family gatherings, to build a case against a man who had been deported from his home in Southern California and then returned. The man had been living in the U.S. since he was 1, worked as a roofer and had children who are U.S. citizens.
Ultimately, a Facebook post showing the man had “checked in” at a Southern California Home Depot in May 2018 led to his arrest. ICE agents monitored the page, waited for him to leave the store, then pulled him over. He was charged with felony illegal reentry.
Ray Ybarra Maldonado, an immigration and criminal attorney in Phoenix, said he’s seen more requests for social media handles in his immigration paperwork filings over the last few years. It can be nerve-wracking to think that the federal government will be combing through a client’s posts, he said, but clients have to remember that ultimately, anything put on the internet is for public consumption.
“We all think when we post something on social media that it’s for our friends, for our family,” Ybarra Maldonado said. “But people have to understand that whatever you put out there, it’s possible that you could be sitting in a room across from a government agent someday asking you a question about it.”
Ybarra Maldonado said he’s seen immigration processes where someone is appealing to the court that they are a moral, upstanding person, but there are screenshots of them from social media posing with guns or drugs.
Ybarra Maldonado suggests that people applying for citizenship or temporary protections consider keeping their social media pages private, and to only connect with people that they know. He also warns that people who share info about their legal status online can be the target of internet scams, as there’s always someone looking to exploit vulnerable populations.
But maintaining a digital footprint can also be a positive thing for his clients, Ybarra Maldonado said. Printouts from social media can provide evidence of the longevity of someone’s residence in the U.S., or show them as an active participant in their community. It’s also a major way that immigrants stay connected to their families and friends in other countries, and find community in the U.S.
Identifying yourself online
For José Patiño, a 35-year-old DACA recipient, that goal of staying connected to his community was the reason he eventually began using his full name online.
When he was 6, Patiño and his mother immigrated from Mexico to join his father in West Phoenix. From the beginning, he said, his parents explained his immigration status to him, and what that meant — he wasn’t eligible for certain things, and at any time, he could be separated from them. If he heard the words “la migra,” or immigration, he knew to find a safe place and hide.
In Patiño’s neighborhood, he said, there was an ever-present feeling that the many immigrants living there were limited and needed to be careful. He realized he could work, but it would always be for less money, and he’d have to keep quiet about anything he didn’t agree with. Most people in his neighborhood didn’t use social media or didn’t identify themselves as “undocumented.”
“You don’t want your status to define your whole identity,” he said. “And it’s something that you don’t want a constant reminder that you have limitations and things that you can’t do.”
But like most millennials, when Patiño went to college, he discovered that Facebook was the main way of communicating and organizing. He went “back and forth at least 100 times” over signing up with the social media platform, and eventually made a profile with no identifying information. He used a nickname and didn’t have a profile photo. Eventually, though, he realized no one would accept his friend requests or let him into groups.
“And then little by little, as I became more attuned to actually being public, social media protected me more — my status — than being anonymous,” he said. “If people knew who I was, they would be able to figure out how to support me.”
Patiño and others interviewed for this story acknowledged that the DACA program is temporary and could change with an incoming federal administration. In the first few months of his presidency in 2017, Donald Trump announced he was rescinding the program, though the Supreme Court later ruled it would stand.
That moment pushed Patiño toward community organizing. He is now very much online as his full self, as he and his wife, Reyna Montoya, run Phoenix-based Aliento, which aims to bring healing practices to communities regardless of immigration status. The organization provides art and healing workshops, assists in grassroots organizing, and provides resources for undocumented students to get scholarships and navigate the federal student aid form.
Now, Patiño said, he would have very personal conversations with anyone considering putting themselves and their status online. Immigrants who share their personal experiences have gained a lot of positive exposure and community, but it can take a toll, he said. His online presence is now an extension of the work he does at Aliento.
“Basically, I want to be the adult that my 17-, 18-year-old-self needed,” he said. “For me, that’s how I see social media. How can I use my personal social media to provide maybe some hope or some resources with individuals who are, right now, maybe seeing loss or are in the same situation that I was in?”
Tobore Oweh, a 34-year-old Nigerian immigrant who arrived in Maryland when she was 7, has spent the last decade talking about her status online. After she received DACA protections in 2012, she felt like it was a way to unburden some of the pressures of living life without full citizenship, and to find people going through similar things.
“That was like a form of liberation and freedom, because I felt like I was suppressing who I was, and it just felt like this heavy burden around immigration and just like, it’s just a culture to be silent or fear,” Oweh said. “And for me, sharing my story at that time was very important to me.”
She connected with others through UndocuBlack, a multi-generational network of current and former undocumented Black people that shares resources and tools for advocacy. Being open about your status isn’t for everyone, she said, but she’s a naturally bold and optimistic person.
She referred to herself as “DACA-mented,” saying she feels she has the privilege of some protection through the program but knows it’s not a long-term solution. She’s never felt “super safe,” and was uneasier during the Trump administration, when he made moves to end the program.
“Everyone with DACA is definitely privileged, but you know, we all are still experiencing this unstable place of like, not knowing,” she said.
Since sharing more of her experiences online, Oweh said she feels a lot more opportunities and possibilities came into her life. Oweh moved to Los Angeles seven years ago and runs a floral business called The Petal Effect. She feels safe in California, as the state has programs to protect immigrants from discrimination through employment, education, small businesses and housing.
For Oweh, it was never a question of if she’d use social media, but rather how she would. She feels the accessibility to community and for sharing resources far outweighs the risks of being public about her status.
“Growing up, it wasn’t like what it is now. I feel like, you know, future generations, or you know, the people that are here now, like we have more access to community than I did growing up just off of social media,” Oweh said. “So it’s been instrumental in amplifying our voices and sharing our stories.”
Being vocal about your status isn’t right for everyone, Beleza Chan, director of development and communications for education-focused Immigrants Rising, told States Newsroom.
Social media, student organizing, protests and blogging drove the DREAM Act movement and the creation of DACA over the last two decades, and those movements were essential to immigrant rights today. But those feelings of security come in waves, she said.
“I think the political climate certainly affects that,” Chan said. “…In the previous years, it was ‘undocumented and unafraid,’ and since Trump, it’s been like ‘you’re undocumented and you’re very afraid to speak up.’”
Governments often struggle with massive new IT projects
https://www.academytrans.com/2024/08/30/governments-often-struggle-with-massive-new-it-projects/
Paige Gross | Fri, 30 Aug 2024
Government requirements and culture can make upgrading aging computer systems difficult, experts say. (Getty Images)
Idaho’s state government was facing a problem.
In 2018, its 86 state agencies were operating with a mix of outdated, mismatched business systems that ran internal processes like payroll and human resources. Some of the programs dated back to the 1980s, and many were written in programming languages they don’t teach in engineering schools anymore.
The state made a clear choice — one many other state and city governments have made in recent years: it overhauled its entire IT suite with one cloud-based software system.
But since the $121 million project, called Luma, rolled out in July 2023, things have not gone as planned.
Luma has created procedural and data errors and caused “disruptions in day-to-day processes and [is] impacting overall productivity,” said an audit that was provided to legislators in June.
Five months into its launch last year, the Luma project was still receiving criticism from employees, organizations that work with the state’s government agencies and from top state legislators.
Speaker of the Idaho House of Representatives Mike Moyle said in a November 2023 Legislative Council meeting that the state might want to come up with an exit plan for the platform — “No offense, this thing is a joke and it’s not working,” he told legislators.
Idaho’s Luma project is just one of many government IT overhauls that hasn’t gone as smoothly as city and state officials may have aimed for.
As few as 13% of large government IT projects succeed, according to a field guide by the U.S. General Services Administration’s 18F team. The group of designers, software engineers, strategists and product managers works within the GSA to help government agencies buy and build tech products.
State projects, the organization’s report says, can face the most challenges because state departments often don’t have sufficient knowledge about modern software development, and their procurement procedures can be too outdated to properly vet huge software solutions.
“Every year, the federal government matches billions of dollars in funding to state and local governments to maintain and modernize IT systems used to implement federal programs such as Medicaid, child welfare benefits, housing, and unemployment insurance,” 18F’s State Software Budgeting Handbook said. “Efforts to modernize those legacy systems fail at an alarmingly high rate and at great cost to the federal budget.”
Why are governments overhauling long-standing IT systems?
Most of the time, as in the case of Idaho, a state is seeking to overhaul a series of aging, inflexible and ineffective systems with one more modernized approach.
Each year, governments need to budget and allocate resources to maintain existing systems and to get them to work with other business operation systems. In 2019, 80% of the $90 billion federal IT spending budget went toward maintenance of legacy software.
Giant projects, like Washington state’s proposed $465 million replacement of its legacy systems, are meant to replace the millions spent every year keeping old systems running.
Aging software systems aren’t just awkward or inefficient to use, but they can also pose cybersecurity risks. Departments that use systems built with older programming languages that are going out of style will struggle to find employees who can maintain them, experts say. Departments might also struggle to get newer business systems to integrate with older ones, which causes the potential for hiccups in operation.
A closer look at Luma
Idaho’s State Controller’s Office found itself in that position six years ago when it sought to overhaul all its business operation systems. Scott Smith, the chief deputy controller, and project manager of Luma, said they were trying to maintain systems that they were losing technical support for.
Up until that point, each agency had built its own homegrown system or procured one on its own. There was a desire to modernize operations statewide and audit the return on investment for taxpayers. The project got the name Luma, reflecting the state’s aim to “enlighten, or shine a light on” its existing systems and update them, Smith said.
After a procurement process, the state chose enterprise resource planning software company Infor, and replaced a collection of separate systems that ran payroll, budgets, financial management and human resources with one cloud-based solution. Many of these legacy systems dated back to 1987 and 1988, and were becoming vulnerable to security threats, Smith said.
Reports by the Idaho Capital Sun found that since its rollout last summer, the new system didn’t correctly distribute $100 million in interest payments to state agencies, double-paid more than $32 million in Idaho Department of Health and Welfare payments, and created payroll issues or delays for state employees. A nonprofit that works with the state said it wasn’t paid for months, and only received payments after it sought attention from state legislators and local media. And on launch day in July 2023, only about 50% of employees had completed basic training on the system.
In February, Moyle and a bipartisan group of eight legislators asked an independent, nonpartisan state watchdog agency called the Office of Performance Evaluations to look into Luma’s software. And in June, a Legislative Services audit found the system lacked a range of information technology controls for data validation and security.
The performance evaluation report isn’t due until October, but Ryan Langrill, interim director of the OPE, said in August that they were told to make the Luma study its priority.
“Our goal is to identify what went well and what didn’t and to offer recommendations for future large scale IT projects,” Langrill said.
Smith told States Newsroom that with any large-scale IT project, there’s always going to be difficulties during the first year of implementation. Idaho is the first to do a rollout of this kind, where all business processes went live at once in a multi-cloud environment, he said.
They developed requirements for the system for several years before its rollout last year and spent time in system integration testing with experts from Infor.
“Once you put it into the real world, right? There’s still a lot for you to understand,” Smith said. “And while the system itself can provide you the functionality, there’s still a lot of inherent business processes that need to be adapted to the new system.”
Each agency had to evaluate their own internal processes, Smith said. Large-scale departments like military, transportation and health and human services are going to operate differently than smaller ones like libraries and the historical society. Trying to provide a singular system to support each facet of government is going to come with its challenges, he said.
Human error has also likely played a role in the rollout, Smith said. As employees have to learn the new system and make changes to years-long processes, they’ll have to take time to change, adjust, refine and improve.
Smith said he hopes the Office of Performance Evaluations looks at the Luma project with a “holistic” approach, going back to source selections and analyzing what could have been done better with everything from implementation to the development of requirements for the technology.
“We’ll obviously look at those results and see where we can make improvements, but it can also be used, I hope, as a source document for others…” Smith said. “Every state’s going through a system modernization effort, that they can use to help improve their potential for success in their projects.”
Other challenging rollouts
A similar situation is brewing in Maine with the rollout of its child welfare system, called Katahdin — named after a mountain in Baxter State Park.
The state sought to overhaul its child welfare database used by the Office of Child and Family Services back in 2019 when its older system began losing functionality, the Maine Morning Star reported. It aimed to “modernize and improve” technical support for staff that work with families, and the department received eight proposals from software companies in 2021, but only three met eligibility criteria.
The state ultimately chose Deloitte, and spent nearly $30 million on the project, which went live in January 2022. But employees say their workflow hasn’t been as effective since.
Caseworkers have described it as cumbersome, saying they need to use dozens of steps and duplicative actions just to complete a single task, and that files saved in the system later go missing. It’s additional stress on a department that faces staff vacancies and long waitlists to connect families with resources, the Maine Morning Star reported in March.
In her annual report in 2023, Christine Alberi, the state’s child welfare ombudsman, wrote “Katahdin is negatively affecting the ability of child welfare staff to effectively do their work, and therefore keep children safe.”
Katahdin, too, received recommendations from a bipartisan oversight committee to improve the system earlier this year. Recommendations included factors beyond just the software, like improvements to the court system, recruiting more staff and addressing burnout.
States Newsroom sought to determine if any of the recommendations had been implemented, and to confirm that the department was still using Katahdin, but the department did not return a request for comment.
A fall 2023 report shows that California has also struggled with the maintenance of its statewide financial system that performs budgeting, procurement, cash management and accounting functions. The program, called FI$Cal, has cost about $1 billion since it began in 2005, and last fall State Auditor Grant Parks said that despite two decades of effort, “many state entities have historically struggled to use the system to submit timely data for the [Annual Comprehensive Financial Report].”
The state, which is famously home to tech capital Silicon Valley, has its own department of technology, which oversees the strategic vision and planning of the state’s tech strategy. But the department landed on the Auditor’s “high risk” list in 2023, with Parks saying the department has not made sufficient progress on its tech projects.
Government vs. corporate tech rollouts
When a government rolls out a new software system, two things are happening, says Mark Wheeler, former chief information officer of the City of Philadelphia. First, they’re replacing a system that’s been around for decades, and second, they’re introducing workers to technology that they may see in their private lives, but aren’t used to operating in a government setting.
Sometimes, he said, governments spend a lot of time planning for the day a system goes live, but don’t think about the long learning curve afterward. They spend years defining functionality and phases of a product, but they don’t designate the real resources needed for “change management,” or the capacity for teams to engage with technologists and become a part of the transition to using the new technology.
Wheeler suggests that departments train new hires in advance of a rollout so certain people can fully focus on the technology transition. Learning these new technologies and building new internal processes can become “a full time job” of its own, Wheeler said. The people who are touch-points for their department with the new systems will also need to form relationships with the software companies they’ve chosen to ease the transition.
Huge software rollouts can follow either an “agile” or “waterfall” approach — agile focuses on continuous releases that incorporate customer feedback, while waterfall has a clearly defined series of phases, and one phase must reach completion before the next starts.
“We get this message over and over again that government needs to operate like a business, and therefore all of our major technology transformations need to operate in this agile format,” Wheeler said. “Well, if you don’t properly train people and introduce them to agile and create the capacity for them to engage in those two week sprints, that whole agile process starts to fall apart.”
Another way these tech transformations differ between private and public sectors is that private corporations often have project managers who oversee the many facets of a project and “own” it from start to finish. Between constant iteration, attention to long-term health, and the care and growth of a project, Wheeler says, corporations tend to invest in more people to see transitions through.
Wheeler acknowledged that it can be frustrating for residents to see huge budgets dedicated to government projects that take time to come to fruition and to work smoothly. But his main advice to state or city governments that are on the precipice of a huge change is to invest in the change management teams. When a government is spending potentially hundreds of millions of dollars on a new solution, the tiny budget line of some additional personnel can make or break the success of a project.
And finally, Wheeler says, governments and residents should keep in mind the differing expectations and priorities between private and public sectors when comparing them.
Tech transformations at large companies are mostly about meeting a bottom line and return on investment, while governments are responsible for the health, safety and stability of their societies. They also require the feedback and inclusion of many, many stakeholders and due process procedures, Wheeler said, and they have to be transparent about their decision-making.
Governments also just aren’t known to be super great with change, he said.
“As much as the public says they want government to move quickly, when you propose a very big change, suddenly everyone wants to question it and make sure that they have their say in the process,” Wheeler said. “And that includes technology pieces so that will slow it all down.”
Americans’ perception of AI is generally negative, though they see ‘beneficial applications’
https://www.academytrans.com/2024/08/28/americans-perception-of-ai-is-generally-negative-though-they-see-beneficial-applications/
Paige Gross | Wed, 28 Aug 2024
A new poll of Americans across nine states by Heartland Forward finds that Americans are generally wary of artificial intelligence but are more positive about its potential in specific economic sectors. (Getty Images)
A vast majority of Americans feel negatively about artificial intelligence and how it will impact their futures, though they also report they don’t fully understand how and why the technology is currently being used.
The sentiments came from a survey conducted this summer by think tank Heartland Forward, which used Aaru, an AI-powered polling group that uses news and social media to generate respondents.
The poll sought to learn about perceptions of AI among Americans across different racial, gender and age groups in Alabama, Illinois, Indiana, Louisiana, Michigan, North Dakota, Ohio, Oklahoma and Tennessee. Heartland Forward also held in-person dinners in Fargo, North Dakota, and Nashville, Tennessee, to collect sentiments.
While more than 75% of respondents reported that they feel skeptical, scared or overall negatively about AI, they reported more positive feelings when they learned about specific uses in industries like health care, agriculture and manufacturing.
Many of the negative feelings were about AI and work, with 83% of respondents saying they think it could negatively impact their job opportunities or career paths. Those respondents said they feel anxious about AI in their industries, and nearly 53% said they feel they should get AI training in the workplace. Louisiana respondents showed the highest level of concerns for job opportunities (91%), with Alabama showing highest levels of workplace anxiety (90%).
Respondents also had huge doubts about AI’s ethical capabilities and data protection, with 87% saying they don’t think AI can make unbiased ethical decisions, and 89% saying it doesn’t have the ability to safeguard privacy.
But when the pollsters told respondents about specific AI uses in health care, agriculture, manufacturing, education, transportation, finance and entertainment, they got positive responses. The majority of respondents believe AI can have “beneficial applications” across numerous industries.
Nearly 79% of respondents felt AI could have a moderate or positive impact on health care, 77% said so about agriculture, manufacturing and education, 80% said so about transportation, 73% said so about finance and 70% said so about entertainment.
Very strong positive feelings about AI were less common, but some states stood out, seeing applications in dominant local industries. North Dakota showed more interest than others when it came to agriculture, with 35% of people seeing “very high” potential, compared to 19% in Oklahoma and 18% in Louisiana.
“It really shows us that one, education is important, and that two, we need to bring the right people around the table to talk about it,” said Angie Cooper, executive vice president of Heartland Forward.
The poll found very little variation in positive and negative sentiment across gender, age and racial groups. Negative sentiment about AI’s impact on society was held across the entire political spectrum, too, Cooper said.
Another uniting statistic was that at least 93% of respondents believe that it’s at least “moderately important” for governments to regulate AI.
Cooper said that during the organization’s dinners in Fargo and Nashville — which brought investors, entrepreneurs, business owners and policymakers together — it was clear that people had some understanding of how AI was being used in their sector, but they weren’t aware of policies and regulations introduced at the state level.
Though there’s no federal AI legislation, 11 states have enacted laws so far this year about how to use, regulate or place checks and balances on AI. There are now 28 states with AI legislation.
“The data shows, and the conversations that we’ve had in Fargo and Nashville really are around that there’s still a lack of transparency,” Cooper said. “And so they believe policy can help play a role there.”
Where exactly are all the AI jobs?
https://www.academytrans.com/2024/08/26/where-exactly-are-all-the-ai-jobs/
https://www.academytrans.com/2024/08/26/where-exactly-are-all-the-ai-jobs/#respond[email protected] (Paige Gross)Mon, 26 Aug 2024 09:45:37 +0000https://www.academytrans.com/?p=21177
(Stanford University graphic)
The desire for artificial intelligence skills in new hires has exploded over the last five years and continues to be a priority for hiring managers across nearly every industry, according to data from Stanford University’s annual AI Index Report.
In 2023, 1.6% of all United States-based jobs required AI skills, a slight dip from the 2% posted in 2022. The decrease comes after many years of growing interest in artificial intelligence and is likely attributable to hiring slowdowns, freezes or layoffs at major tech companies like Amazon, Deloitte and Capital One in 2023, the report said.
The numbers are still well above those of just a few years ago, and in 2023, thousands of jobs across every industry required AI skills.
What do those AI jobs look like? And where are they based, exactly?
Generative AI skills, or the ability to build algorithms that produce text, images or other data when prompted, were sought after most, with nearly 60% of AI-related jobs requiring those skills. Large language modeling, or building technology that can generate and translate text, was second in demand, with 18% of AI jobs citing the need for those skills.
Those skills were followed by ChatGPT knowledge, prompt engineering, or training AI, and two other specific machine learning skills.
The industries that require these skills run the gamut — the information industry ranked first with 4.63% of jobs while professional, scientific and technical services came in second with 3.33%. The financial and insurance industries followed with 2.94%, and manufacturing came in fourth with 2.48%.
Public administration, education, management and utilities jobs all sought AI skills in 1-2% of their open roles, while agriculture, mining, wholesale trade, real estate, transportation, warehousing, retail trade and waste management sought AI skills in 0.4-0.85% of their jobs.
Though AI jobs are concentrated in some areas of the country, nearly every U.S. state had thousands of AI-specific jobs in 2023, the report found.
California — home to Silicon Valley — had 15.3%, or 70,630 of the country’s AI-related jobs posted in 2023. It was followed by Texas at 7.9%, or 36,413 jobs. Virginia was third, with 5.3%, or 24,417 of AI jobs.
Based on population, Washington state had the highest percentage of people in AI jobs, with California in second, and New York in third.
Montana, Wyoming and West Virginia were the only states with fewer than 1,000 open roles requiring AI, but because of their smaller populations, those jobs still made up 0.75%, 0.95% and 0.46%, respectively, of each state’s open roles last year.
Though the number of jobs dipped from 2022 to 2023, the adoption of AI technologies across business operations has not. In 2017, 20% of businesses reported that they had begun using AI for at least one function of their work. In 2022, 50% of businesses said they had, and that number reached 55% in 2023.
For those that have incorporated AI tools into their businesses, the technology is making workers more productive, the report found. Studies cited in the report have shown that AI tools allow workers to complete tasks more quickly and improve the quality of their work. The research suggested that AI could also be capable of upskilling workers.
The report acknowledges that with all the technological advances that the AI industry has seen in the last five years, there are still many unknowns. The U.S. is still awaiting federal AI legislation, while states make their own regulations and laws.
The Stanford report predicts two futures for the trajectory of the technology: in one, the technology continues to develop and increase productivity, though it could be put to both “good and bad uses.” In the other, without proper research and development, the adoption of AI technologies could be constrained, researchers said.
“They are stepping in to encourage the upside,” the report said of government bodies. “Such as funding university R&D and incentivizing private investment. Governments are also aiming to manage the potential downsides, such as impacts on employment, privacy concerns, misinformation, and intellectual property rights.”
AI will play a role in election misinformation. Experts are trying to fight back.
https://www.academytrans.com/2024/08/20/ai-will-play-a-role-in-election-misinformation-experts-are-trying-to-fight-back/
https://www.academytrans.com/2024/08/20/ai-will-play-a-role-in-election-misinformation-experts-are-trying-to-fight-back/#respond[email protected] (Paige Gross)Tue, 20 Aug 2024 09:00:55 +0000https://www.academytrans.com/?p=20916
The rapid advancement of artificial intelligence technology has made it easier to create believable but totally fake videos and images and spread misinformation about elections, experts say. (Tero Vesalainen/Getty Images)
In June, amid a bitterly contested Republican gubernatorial primary race, a short video began circulating on social media showing Utah Gov. Spencer Cox purportedly admitting to fraudulent collection of ballot signatures.
The false video was part of a growing wave of election-related content created by artificial intelligence. At least some of that content, experts say, is false, misleading or simply designed to provoke viewers.
AI-created likenesses, often called “deepfakes,” have increasingly become a point of concern for those battling misinformation during election seasons. Creating deepfakes used to take a team of skilled technologists with time and money, but recent advances and accessibility in AI technology have meant that nearly anyone can create convincing fake content.
“Now we can supercharge the speed and the frequency and the persuasiveness of existing misinformation and disinformation narratives,” Tim Harper, senior policy analyst for democracy and elections at the Center for Democracy and Technology, said.
AI has advanced remarkably since just the last presidential election in 2020, Harper said, noting that OpenAI’s release of ChatGPT in November 2022 brought accessible AI to the masses.
About half of the world’s population lives in countries that are holding elections this year. And the question isn’t really if AI will play a role in misinformation, Harper said, but rather how much of a role it will play.
How can AI be used to spread misinformation?
Though it is often intentional, misinformation caused by artificial intelligence can sometimes be accidental, due to flaws or blind spots baked into a tool’s algorithm. AI chatbots search for information in the databases they have access to, so if that information is wrong or outdated, they can easily produce wrong answers.
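That failure mode, a chatbot confidently repeating stale data, can be sketched in a few lines. The knowledge base, the question and the dates below are all invented for illustration; real chatbots draw on far larger corpora, but the principle is the same: an answer is only as current as the data behind it.

```python
from datetime import date

# Hypothetical, simplified knowledge base. The entry and its dates are
# invented for illustration only.
KNOWLEDGE_BASE = {
    "polling place for precinct 12": {
        "answer": "Lincoln Elementary School gym",
        "last_updated": date(2022, 11, 1),  # stale if the location later moved
    },
}

def answer(question: str, today: date) -> str:
    """Return the stored answer, flagging entries that may be outdated."""
    entry = KNOWLEDGE_BASE.get(question.lower())
    if entry is None:
        return "I don't have information on that."
    note = ""
    if (today - entry["last_updated"]).days > 365:
        note = " (Warning: this information is over a year old.)"
    return entry["answer"] + note

# The stored answer is returned as written, which is wrong if the
# location has since changed; hence the staleness warning.
print(answer("Polling place for precinct 12", date(2024, 11, 5)))
```

A real system would pair this with live, authoritative sources, such as a state’s official voting site, rather than trusting a static store.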
OpenAI said in May that it would be working to provide more transparency about its AI tools during this election year, and the company endorsed the bipartisan Protect Elections from Deceptive AI Act, which is pending in Congress.
“We want to make sure that our AI systems are built, deployed, and used safely,” the company said in the May announcement. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”
Poorly regulated AI systems can lead to misinformation. Elon Musk was recently called upon by several secretaries of state after his AI search assistant Grok, built for social media platform X, falsely told users Vice President Kamala Harris was ineligible to appear on the presidential ballot in nine states because the ballot deadline had passed. The information stayed on the platform, and was seen by millions, for more than a week before it was corrected.
“As tens of millions of voters in the U.S. seek basic information about voting in this major election year, X has the responsibility to ensure all voters using your platform have access to guidance that reflects true and accurate information about their constitutional right to vote,” reads the letter signed by the secretaries of state of Washington, Michigan, Pennsylvania, Minnesota and New Mexico.
Generative AI impersonations also pose a new risk of spreading misinformation. In addition to the fake video of Cox in Utah, a deepfake video of Florida Gov. Ron DeSantis falsely showed him dropping out of the 2024 presidential race.
Some misinformation campaigns happen on huge scales like these, but many others are more localized, targeted campaigns. For instance, bad actors may imitate the online presence of a neighborhood political organizer, or send AI-generated text messages to listservs in certain cities. Language minority communities have been harder to reach in the past, Harper said, but generative AI has made it easier to translate messages or target specific groups.
For example, someone could use data about local polling places and public phone numbers to create messages specific to you. They may send a text the night before election day saying that your polling location has changed from one spot to another, and because they have your original polling place correct, it doesn’t seem like a red flag.
“If that message comes to you on WhatsApp or on your phone, it could be much more persuasive than if that message was in a political ad on a social media platform,” Harper said. “People are less familiar with the idea of getting targeted disinformation directly sent to them.”
Verifying digital identities
The deepfake video of Cox helped spur a partnership between a public university and a new tech platform with the goal of combating deepfakes in Utah elections.
From July 2024 through Inauguration Day in January 2025, students and researchers at the Gary R. Herbert Institute for Public Policy and the Center for National Security Studies at Utah Valley University will work with SureMark Digital. Together, they’ll verify the digital identities of politicians to study the impact AI-generated content has on elections.
Through the pilot program, candidates seeking one of Utah’s four congressional seats and the open Senate seat will be able to authenticate their digital identities at no cost through SureMark’s platform, with the goal of increasing trust in Utah’s elections.
Brandon Amacher, director of the Emerging Tech Policy Lab at UVU, said he sees AI playing a similar role in this election as the emergence of social media did in the 2008 election — influential but not yet overwhelming.
“I think what we’re seeing right now is the beginning of a trend which could get significantly more impactful in future elections,” Amacher said.
In the first month of the pilot, Amacher said, the group has already seen how effective these simulated video messages can be, especially in short-form media like TikTok and Instagram Reels. A shorter video is easier to fake, and if someone is scrolling these platforms for an hour, a short clip of misinformation likely won’t get much scrutiny, but it can still influence their opinion of a topic or a person.
SureMark Chairman Scott Stornetta explained that the verification platform, which rolled out in the last month, allows a user to acquire a credential. Once that’s approved, the platform runs all of the user’s published content through an authorization process, using cryptographic techniques that bind a person’s identity to the content that features them. A browser extension then shows viewers whether content was published by the credentialed person or by an unauthorized actor.
The platform was created with public figures in mind, especially politicians and journalists who are vulnerable to having their images replicated. Anyone can download the SureMark browser extension to check content across different media platforms, not just the people who get accredited. Stornetta likened the technology to an X-ray.
“If someone sees a video or an image or listens to a podcast on a regular browser, they won’t know the difference between a real and a fake,” he said. “But if someone that has this X-ray vision sees the same documents in their browser, they can click on a button and basically find out whether it’s a green check or red X.”
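Stornetta’s description, binding an identity to content and letting a browser extension check it, resembles a standard digital-signature workflow. The sketch below is not SureMark’s actual implementation: it uses a shared-secret HMAC where a real system would use asymmetric signatures and certificates, and every name in it is hypothetical.

```python
import hashlib
import hmac

# Hypothetical credential: in a real system this would be an asymmetric
# key pair issued after identity verification, not a shared secret.
CREDENTIAL_KEY = b"alice-credential-secret"

def sign_content(content: bytes, key: bytes) -> str:
    """Bind an identity (the key holder) to content by signing its hash."""
    return hmac.new(key, hashlib.sha256(content).digest(), hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str, key: bytes) -> bool:
    """What a browser extension might do: decide green check or red X."""
    expected = sign_content(content, key)
    return hmac.compare_digest(expected, signature)

video = b"original campaign video bytes"
tag = sign_content(video, CREDENTIAL_KEY)

print(verify_content(video, tag, CREDENTIAL_KEY))               # authentic: True
print(verify_content(b"deepfaked bytes", tag, CREDENTIAL_KEY))  # tampered: False
```

The “green check or red X” corresponds to the boolean this verification returns: any alteration to the content changes its hash, so the signature no longer matches.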
The pilot program is currently working to credential the state’s politicians, so it will be a few months before they start to glean results, but Justin Jones, the executive director of the Herbert Institute, said that every campaign they’ve connected with has been enthusiastic to try the technology.
“All of them have said we’re concerned about this and we want to know more,” Jones said.
What’s the motivation behind misinformation?
Lots of different groups with varying motivations can be behind misinformation campaigns, Michael Kaiser, CEO of Defending Digital Campaigns, told States Newsroom.
There is sometimes misinformation directed at specific candidates, as with the deepfake videos of Govs. Cox and DeSantis. Campaigns around geopolitical events, like wars, are also common, aiming to sway public opinion.
Russia’s influence on the 2016 and 2020 elections is well-documented, and efforts will likely continue in 2024, with a goal of undermining U.S. support of Ukraine, a Microsoft study recently reported.
There’s sometimes a monetary motivation to misinformation, Amacher said, as provocative, viral content can turn into payouts on platforms that pay users for views.
Kaiser, whose work focuses on providing cybersecurity tools to campaigns, said that while interference in elections is sometimes the goal, more commonly, these people are trying to cause a general sense of chaos and apathy toward the elections process.
“They’re trying to divide us at another level,” he said. “For some bad actors, the misinformation and disinformation is not about how you vote. It’s just that we’re divided.”
It’s why much of the AI-generated content is inflammatory or plays on your emotions, Kaiser said.
“They’re trying to make you apathetic, trying to make you angry, so maybe you’re like, ‘I can’t believe this, I’m going to share it with my friends,’” he said. “So you become the platform for misinformation and disinformation.”
Strategies for stopping the spread of misinformation
Understanding that emotional response and the eagerness to share or engage with the content is a key tool for slowing the spread of misinformation. If you’re in that moment, there are a few things you can do, the experts said.
First, try to find out if an image or sound bite you’re viewing has been reported elsewhere. You can use reverse image search on Google to see if that image is found on reputable sites, or if it’s only being shared by social media accounts that appear to be bots. Websites that fact check manufactured or altered images may point you to where the information originated, Kaiser said.
If you’re receiving messages about election day or voting, double check the information online through your state’s voting resources, he added.
Adding two-factor authentication on social media profiles and email accounts can help ward off phishing attacks and hacking, which can be used to spread misinformation, Harper said.
If you get a phone call you suspect may be AI-generated, or is using someone’s voice likeness, it’s good to confirm that person’s identity by asking about the last time you spoke.
Harper also said there are a few giveaways to look out for in AI-generated images, like an extra finger or a distorted ear or hairline; AI has a hard time rendering those finer details.
Another visual clue, Amacher said, is that deepfake videos often feature a blank background, because busy surroundings are harder to simulate.
And finally, the closer we are to the election, the likelier you are to see misinformation, Kaiser said. Bad actors use proximity to the election to their advantage — the closer you are to election day, the less time your misinformation has to be debunked.
Technologists themselves can take on some of the responsibility for curbing misinformation through the way they build AI, Harper said. He recently published a summary of recommendations for AI developers with suggestions for best practices.
The recommendations included refraining from releasing text-to-speech tools that allow users to replicate the voices of real people, refraining from the generation of realistic images and videos of political figures and prohibiting the use of generative AI tools for political ads.
Harper also suggests that AI companies disclose how often a chatbot’s election-related training data is updated, develop machine-readable watermarks for content and promote authoritative sources of election information.
Some tech companies already voluntarily follow many of these transparency best practices, but much of the country is following a “patchwork” of laws that haven’t developed at the speed of the technology itself.
A bill prohibiting the use of deceptive AI-generated audio or visual media of a federal candidate was introduced in Congress last year, but it has not been enacted. Laws focusing on AI in elections have passed at the state level in the last two years, though; they primarily either ban AI-created messaging and images or require specific disclaimers about the use of AI in campaign materials.
But for now, these young tech companies that want to do their part in stopping or slowing the spread of misinformation can seek some direction from the CDT report or pilot programs like UVU’s.
“We wanted to take a stab at creating a kind of a comprehensive election integrity program for these companies,” Harper said, “understanding that unlike the kind of legacy social media companies, they’re very new and quite young and have no time or kind of the regulatory scrutiny required to have created strong election integrity policies in a more systematic way.”
IT glitch causing delays in flights, business operations globally
https://www.academytrans.com/2024/07/19/it-glitch-causing-delays-in-flights-business-operations-globally/
https://www.academytrans.com/2024/07/19/it-glitch-causing-delays-in-flights-business-operations-globally/#respond[email protected] (Paige Gross)Fri, 19 Jul 2024 15:31:55 +0000https://www.academytrans.com/?p=20083
Long queues of passengers form at the check-in counters at Ninoy Aquino International Airport in Manila, Philippines, on July 19, 2024, amid a global IT disruption caused by a Microsoft outage and a CrowdStrike IT problem. The outage impacted users globally, leading to widespread disruptions, including cancelled flights and problems at retailers worldwide. Airlines like American Airlines and Southwest Airlines reported difficulties with their systems, which rely on Microsoft services for operations. The outage affected check-in processes and other essential functions, frustrating travellers and backing up lines at many affected airports worldwide. (Photo by Ezra Acayan/Getty Images)
Editor’s note: The global glitch also delayed delivery of the Morning Lantern, our daily emailed newsletter, by 19 minutes. Not getting our free newsletter? Subscribe here
Air travel, banking, media and hospital systems are just some of the industries affected by a bug in a software update that scrambled business operations globally Friday morning.
Many of those who use Microsoft Windows are likely experiencing a “blue screen of death” or an error page. The issue is due to a single bug in a software update from cybersecurity company CrowdStrike, which provides antivirus software for Microsoft users.
The company pushed out an update to the software overnight, and at 1:30 a.m. EST, CrowdStrike said its “Falcon Sensor” software was causing Microsoft Windows to crash and display a blue screen, Reuters reported.
CrowdStrike President and CEO George Kurtz released a statement early Friday morning on X, saying that the incident was not a security concern or a cyberattack. He added that the issue has been identified and that the company has been deploying a fix.
“We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website,” Kurtz said.
The bug was causing major delays and cancellations at airports across the globe. Flight tracking data site FlightAware noted nearly 24,000 delays and 2,300 cancellations globally by 9:30 a.m. Friday. While some airlines have been able to resume operation of their digital systems, others are finding analogue solutions in the meantime.
The U.S. Department of Transportation said it was monitoring the situation and suggested that those experiencing travel delays and cancellations use its FlightRights.gov website to help navigate them.
Some states, including Alaska, Virginia and New Jersey, were experiencing issues with their 911 and non-emergency lines.
New Jersey Gov. Phil Murphy released a statement early Friday morning saying that the state had activated its State Emergency Operations Center in response to the disruptions and has provided guidance to other agencies about how to work through the situation.
“We are also engaging county and local governments, 911 call centers, and utilities to assess the impact and offer our assistance,” he said.
By 10 a.m. Friday, some global companies were seeing relief from the outages. Downdetector, which tracks real-time outages, showed companies like Visa, Zoom, UPS and Southwest Airlines returning to more normal operations than they had seen in the early morning hours.
Speaking to the hosts of Today this morning, Kurtz said he was “deeply sorry for the impact we’ve caused to customers, to travelers, to anyone affected.” He said some customers have been able to reboot and are seeing progress getting online, and that trend will likely continue throughout the day.
Effects from the global IT outage Friday continued to be felt throughout the day, especially by government systems and transportation hubs.
Courts in Massachusetts and New York experienced disrupted service, as court transcription recording systems were not operational in some Massachusetts courthouses, the Associated Press reported.
The Texas Department of Public Safety, which runs the state’s driver’s license offices, also closed them for the day, with “no current estimate” on when they will reopen.
Around 4 p.m. EST, Kurtz released more statements on X, reiterating that the outage was not a security breach.
“We understand the gravity of the situation and are deeply sorry for the inconvenience and disruption,” he said. “We are working with all impacted customers to ensure that systems are back up and they can deliver the services their customers are counting on.”
Kurtz said the company is working on a “technical update and root cause analysis” that they will share with customers, and shared a letter that was sent to customers and partners.
“We know that adversaries and bad actors will try to exploit events like this. I encourage everyone to remain vigilant and ensure that you’re engaging with official CrowdStrike representatives. Our blog and technical support will continue to be the official channels for the latest updates,” it said.
“Nothing is more important to me than the trust and confidence that our customers and partners have put into CrowdStrike. As we resolve this incident, you have my commitment to provide full transparency on how this occurred and steps we’re taking to prevent anything like this from happening again,” it continued.
Kentucky lawmakers hear how other states are regulating, using artificial intelligence
https://www.academytrans.com/2024/07/09/kentucky-lawmakers-hear-how-others-states-are-regulating-using-artificial-intelligence/
https://www.academytrans.com/2024/07/09/kentucky-lawmakers-hear-how-others-states-are-regulating-using-artificial-intelligence/#respond[email protected] (McKenna Horsley)Tue, 09 Jul 2024 19:39:44 +0000https://www.academytrans.com/?p=19711
Discussions about artificial intelligence during the legislative interim are likely to guide action in Kentucky's next legislative session starting in January. (Getty Images)
Kentucky lawmakers on the General Assembly’s Artificial Intelligence Task Force learned Tuesday about how states are using artificial intelligence and possible legal frameworks for the technology.
During the task force’s first meeting, lawmakers heard presentations from technology and government experts about the history of artificial intelligence (AI), ways state governments use it and legislation in other states. Discussions held in the interim sessions about AI are likely to impact legislation in Kentucky’s next legislative session starting in January.
Ryan Harkins, senior director of public policy for Microsoft, told lawmakers about the history of developing artificial intelligence and legislation surrounding it. Harkins said generative AI, which creates text, images or other content, can be used to summarize texts, go beyond traditional keyword searches, write code and more.
Harkins said that while tech companies like Microsoft have adopted ethical principles to guide their AI use, some bad actors may exploit AI. That’s where laws come in.
“We need law and regulation to play its appropriate role, to ensure that everyone in the marketplace — that everyone in the ecosystem — is abiding by certain basic safety and security standards to ensure that we are mitigating any potential risks of harm,” Harkins said.
That ongoing “robust conversation” about what the rules should be includes those in the technology industry, policymakers, elected officials, academics and other members of civil society, Harkins added.
Doug Robinson, the executive director of the National Association of State Chief Information Officers (NASCIO), spoke about the various ways that states currently use AI in their operations, such as translating government websites into multiple languages. In some cases, using AI saves time on these once “laborious” tasks.
Robinson said legislation about AI was introduced in 40 states this year. During Kentucky’s legislative session, two Lexington lawmakers — Republican Sen. Amanda Mays Bledsoe and Democratic Caucus Chair Sen. Reggie Thomas — sponsored a bill aimed at limiting the use of “deep fakes” or deceptive AI to influence elections in Kentucky. The legislation died in a House committee. The bill would have allowed political candidates appearing in manipulated digital media to bring legal action against the sponsor of the media.
Bledsoe is a co-chair of the AI Task Force and Thomas is a member of it.
The issue of AI interference in elections has also come up at the federal level. In May, the U.S. Senate Rules Committee advanced three bills that would address the use of AI in elections.
States surveyed by NASCIO reported concerns about the use of AI for disinformation, biased outputs from AI, and inadequate data privacy and security.
“Without that understanding, without a policy framework, without a set of enterprise directives coming out of the office of the CIO (chief information officer), which we’ve seen from many states, then you have to be concerned about how these tools are being used, and what the impact might be on the citizens and the actual trust in government,” Robinson said.
The lawmakers on the committee had several questions about the advancement of AI and how it can be regulated at the state level.
Rep. Josh Bray, R-Mount Vernon, who is also a co-chair of the task force, asked if AI could be used to improve state government functions in cases like addressing the unemployment services backlog built up during the coronavirus pandemic. He also raised concerns about fraud increasing under an AI system.
Robinson said he viewed that as a “double-edged sword,” as AI is being used in some cases to detect fraud, but adversaries could use generative AI in cybersecurity attacks.
“I think states in the future will be deploying these capabilities to reduce fraud,” he added.
The task force’s next meeting is scheduled for Tuesday, Aug. 13. Bledsoe said lawmakers on the task force have a lot to discuss during the interim session.
“This is not a small topic, as you can tell, and it has widespread implications in the private and public sectors, and I think we’re going to do our best to be mindful of both,” she said.
Driving surge in demand for power, data centers eye Kentucky
https://www.academytrans.com/2024/07/09/driving-surge-in-demand-for-power-data-centers-eye-kentucky/
https://www.academytrans.com/2024/07/09/driving-surge-in-demand-for-power-data-centers-eye-kentucky/#respond[email protected] (Liam Niemeyer)Tue, 09 Jul 2024 09:50:13 +0000https://www.academytrans.com/?p=19670
An Amazon Web Services data center under construction in Stone Ridge, Virginia, on March 27, 2024. Amazon plans to spend almost $150 billion in the coming 15 years on data centers, giving the cloud-computing giant the capacity to handle an expected explosion in demand for artificial intelligence applications and other digital services. (Photo by Nathan Howard/Bloomberg via Getty Images)
LOUISVILLE — The boom in artificial intelligence is fueling a proliferation of new data centers — the computer clusters that power the internet — in “places that maybe we hadn’t thought of before,” an industry spokesman told utility regulators gathered in Louisville last month.
Kentucky could be one of those places in the not-so-distant future.
The state’s largest utility and the legislature have taken steps to attract the investments — potentially billions of dollars — that come when companies such as Microsoft, Google and Amazon develop data centers. The data center boom is also fueling surging demand for power at a time when climate change has upped the pressure to reduce heat-trapping emissions from burning fossil fuels.
In an earnings call in May, the CEO of the parent company of Louisville Gas and Electric and Kentucky Utilities said it is “actively working with several large data centers” in Kentucky. PPL Corp. CEO Vince Sorgi said the prospective centers would each need 300 megawatts to 500 megawatts of electricity.
For comparison, one prospective data center could consume the entire power generation of one of LG&E and KU’s coal-fired units. E.W. Brown Generating Station, the utility’s coal-fired power plant in Mercer County, has a net power capacity of 457 megawatts.
Kentucky lawmakers are among those who see economic potential in these large-scale computer hubs. The GOP-dominated state legislature earlier this year sweetened the enticements for data centers to locate specifically in Jefferson County through House Bill 8, a broad tax policy law passed over the veto of Democratic Gov. Andy Beshear.
HB 8 gives sales tax breaks on data center equipment if a data center owner or operator makes a capital investment of at least $450 million or a “project organizer” invests at least $150 million. Those tax breaks would need approval from the state’s incentives board, the Kentucky Economic Development Finance Authority.
The Kentucky Lantern requested an interview with Rep. Jason Petrie, R-Elkton, the chair of the Kentucky House Appropriations and Revenue Committee and the leading sponsor of HB 8. In response, Petrie in a statement said the incentives in HB 8 were “consistent with our ongoing efforts to make sure Kentucky remains competitive as we continue to explore potential economic investments.”
WDRB reported in April that Louisville Mayor Craig Greenberg said he was excited about a “transformative” economic development project involving a data center that could locate in the southwestern part of the city but declined to discuss additional details about the project.
The prospect of attracting data centers to Kentucky, however, also raises concerns among advocates for the environment who follow utility policy: Would Kentucky consumers be forced to shoulder the financial burden of building new transmission lines and power plants to supply data centers with power? Does the state have enough clean energy to attract data center companies that want access to it?
Randy Strobo, a Louisville attorney focusing on environmental issues and litigation, said state and federal governments ultimately have the responsibility to analyze how new data centers would impact Kentucky’s regional electric grid and local communities “from all different perspectives.”
“There’s going to be other impacts other than just energy,” Strobo said, mentioning how some data centers use large amounts of water to cool computers. “They really need to weigh all the different costs and benefits and try to balance it out in a way that helps more people than hurts them.”
What are data centers — and what can they bring?
In addition to the highly publicized boom in AI services, other factors are also driving the surge in new computer hubs, including the demand for “cloud” storage space and computation power along with a slew of data-driven enterprises across the globe.
Josh Levi, the industry spokesperson and president of the Data Center Coalition who addressed the Louisville conference, put growing data usage in simpler terms: Sending emails. Using search engines. Streaming video. Credit card transactions. Sending high-definition medical records to help make diagnoses in health care.
“We’re doing it in more places: our home, our office, our home office, the plane, the train. We’re doing it all hours in a day,” Levi told the Louisville audience. “It seems like we’re fairly ubiquitous in our command to generate data, and our companies are very much responding to that.”
The payoff, at least in terms of capital investment and potential tax revenue, can be significant for communities and states that have the power, internet connection and workforce.
Communities in Northern Virginia — a nucleus of data center development — could reap tens of millions of dollars from local taxes on the operations. Critics, however, say that Virginia’s state tax breaks for data centers generally cancel out any new revenue brought in by local governments.
States outside the data center hubs of Virginia and Atlanta have already benefited from the boom. In Mississippi, Amazon is spending about $10 billion to build two data center campuses. In Southern Indiana across the Ohio River from Louisville, Facebook’s parent company Meta is investing $800 million in a data center.
Levi in a statement to the Lantern also asserted data centers have a job multiplier effect beyond their direct employment, pointing to a Data Center Coalition-commissioned report by consulting group PricewaterhouseCoopers that found that nationally each data center job supports six jobs elsewhere in the broader economy.
“By prioritizing investments in local communities, data centers also boost supply chain and service ecosystems, creating jobs for thousands of construction professionals during the building phase and providing quality, high-wage jobs for ongoing operations,” Levi said in his statement. “Further, every data center comes with years of reliable support for local economies by promoting job creation at restaurants, hotels, rental car agencies, fiber and HVAC installers, steel fabricators, and many other businesses.”
What do data centers need — and at what cost?
Levi said data center developers are looking for fiber internet connections, a workforce to build and run the centers and places less at risk for natural disasters.
They also need power, and lots of it.
According to a January 2024 report from consulting firm McKinsey and Co., electricity demand by data centers in the United States is expected to go from 17 gigawatts in 2022 to 35 gigawatts in 2030. For comparison, Kentucky’s net summer capacity — the maximum amount of electricity produced in the state during peak summer electricity demand — in 2022 was 17.6 gigawatts, according to federal data.
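Taken at face value, those figures imply projected growth in national data center demand roughly equal to Kentucky’s entire generating capacity. A quick back-of-envelope check, using only the numbers cited above (variable names are illustrative):

```python
# Rough arithmetic on the figures cited above (all values in gigawatts).
# Per the article: McKinsey (US data center demand), federal data
# (Kentucky net summer capacity).
us_dc_demand_2022 = 17.0       # US data center demand, 2022
us_dc_demand_2030 = 35.0       # projected US data center demand, 2030
ky_net_summer_capacity = 17.6  # Kentucky net summer capacity, 2022

growth = us_dc_demand_2030 - us_dc_demand_2022
print(f"Projected national growth: {growth:.0f} GW")
print(f"As a share of Kentucky's 2022 capacity: {growth / ky_net_summer_capacity:.0%}")
```

In other words, the projected 18 gigawatts of new national demand slightly exceeds everything Kentucky could generate at its 2022 summer peak.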
“Reliable power is incredibly important to the data center industry,” Levi said at the conference. “This is an industry who is relied upon to provide non-stop access to the data.”
But in weighing potential investments and jobs, some environmental advocates in Kentucky worry about who would pay for new transmission lines and power plants if they’re needed to run data centers.
“As these new data centers are coming online, how are we going to pay for them?” said Strobo, the environmental attorney. He questions whether states should offer “huge incentives” to data centers given the possibility that the costs to accommodate them could fall on electricity ratepayers.
Two utilities, including American Electric Power, the parent company of Kentucky Power, are protesting before a federal regulator an agreement between Amazon and an independent power producer to use electricity from a Pennsylvania nuclear plant because of concerns that up to $140 million in electricity transmission costs for the agreement could be shifted onto ratepayers. Talen Energy, the independent power producer, is pushing back against AEP’s protest.
Levi told the Lantern that access to clean energy is a consideration for where data centers locate. For example, Amazon is helping pay a local utility in Mississippi to build solar farms to pair with its new data center campuses but will also power the data centers with a new natural gas-fired turbine.
Lane Boldman, the executive director for the environmental advocacy group Kentucky Conservation Committee, says Kentucky must build more renewable energy to compete for new industries, including data centers and an aluminum smelter, which prioritize climate-friendly technology.
“We just haven’t taken the time to build out the power. But we have the potential to build up the power. We certainly have the right mix of ingredients to do that. I mean, that’s just simply a political problem to work through,” Boldman said.
The unique circumstances of data centers
John Bevington, senior director of business and development at LG&E and KU, says manufacturing companies looking to locate in Kentucky usually ask about the availability of land and nearby railroads or highways.
Not so with data centers.
“They’re so energy intensive, they tend to start asking questions of utilities first,” Bevington told the Lantern. “They really need proximity to power lines and power lines that have capacity.”
It’s unclear at this point whether LG&E and KU would seek to build more power generation, fossil fuel-fired or renewables, to meet the demands of prospective data centers coming into the state.
Chris Whelan, a spokesperson for LG&E and KU, told the Lantern that the potential power demands of data centers will be analyzed in the utility’s future energy planning documents to be filed later this year with the Kentucky Public Service Commission, the state’s utility regulator.
Bevington with LG&E and KU said the higher energy demands of prospective data centers coincide with the overall higher energy demands of new economic development projects in Kentucky, including battery plants being built by Ford. For the time being, Bevington said, the number of existing data centers in LG&E and KU’s territory is “pretty minimal.”
Strobo, the environmental lawyer, says tech companies like Google appear to be a safer economic development bet than cryptocurrency mining operations that are similarly energy-intensive. The PSC last year in multiple cases denied or approved electricity cost discounts sought by utilities to serve Bitcoin mining operations across Kentucky.
“It seems like Google and Microsoft and all of them are being intentional about trying to do it in a way that tries to minimize impacts, although, of course, there’s still going to be some pretty major ones from all of it,” Strobo said. “We all know they’re coming. Everybody wants them to come for the most part.”
Bloomberg News recently reported the parent company of Google, which has seen its heat-trapping carbon emissions rise due to its investment in AI, is no longer claiming its operations are carbon neutral, though the company says it still plans to reach net-zero emissions by 2030. Google also recently announced an investment in solar power in Taiwan among other renewable energy endeavors.
While data centers may not create a large number of jobs, Bevington of LG&E and KU said, the tax revenue brought in could be a boost to communities.
“Whether that’s a data center or a new automotive supplier or an electric vehicle battery manufacturer or a new bourbon distillery, I think we really tend to look at all of it as, ‘What can we do to enable growth in our communities?’” Bevington said. “We sort of look at data centers as, you know, another very competitive project.”
Feds OK Kentucky plans to roll out $1 billion for broadband. Here’s what happens next.
https://www.academytrans.com/briefs/feds-ok-kentucky-plans-to-roll-out-1-billion-for-broadband-heres-what-happens-next/
Liam Niemeyer | Mon, 17 Jun 2024
One of the next steps in deploying the money is ensuring the accuracy of Kentucky's broadband access maps. (John Lamb/Getty Images)
Federal officials have approved Kentucky’s plan to deploy almost $1.1 billion to expand broadband, a key step toward connecting homes and businesses throughout the state.
The funding was given to the state last year. Earlier this month the National Telecommunications and Information Administration approved the second volume of Kentucky’s proposed plans for using the money through the federal Broadband Equity, Access and Deployment Program, or BEAD, created by the Bipartisan Infrastructure Law.
Those approved plans include how grants will be awarded to internet providers, what affordability mechanisms will be available to help Kentuckians pay for internet service once it’s expanded, and how workers will be trained to help build broadband connections.
Kentucky will require internet providers applying for the funding to offer a “low-cost” option, which would be $30 a month or less, though internet providers could negotiate the price of that “low-cost” option up to a maximum of $65 a month. The monthly cost may not include charges for installation, maintenance or repairs.
Kentucky Gov. Andy Beshear in a Monday press conference said the internet expansion spurred by the funding “should provide a route” for more Kentuckians to have affordable internet.
“If broadband and high speed internet is just as important as roads and bridges, then everybody needs to be able to use it. So, affordability is absolutely critical,” Beshear said.
The next steps to roll out Kentucky’s funding include a process for ensuring the accuracy of the state’s broadband access maps. Meghan Sandfoss, the executive director of the Kentucky Office of Broadband Development, said her office has received more than 400,000 challenges to the maps from internet providers, nonprofits and local governments, and the challenge process should finish by July. Accurate maps will better identify underserved and unserved parts of the state and make sure broadband expansion isn’t duplicative, Sandfoss said.
From there, the state has less than five years to distribute broadband grants and build internet connection. Sandfoss said the state has until 2028 to distribute the more than $1 billion in BEAD funding. She said broadband projects underway that were previously funded by the state with hundreds of millions of dollars through the American Rescue Plan Act have to be finished by the end of 2026.
Sandfoss said about 12% of Kentucky is either underserved or unserved by internet providers, according to federal data, and the BEAD funding should “close the gap all the way.”
“There’s quite a lot of activity going on right now, and it will continue for the next four years,” Sandfoss said.
Some providers will offer low-cost internet even as federal program ends, White House says
https://www.academytrans.com/2024/06/03/some-providers-will-offer-low-cost-internet-even-as-federal-program-ends-white-house-says/
Shauneen Miranda | Mon, 03 Jun 2024
May 31 was the official last day of the pandemic-era Affordable Connectivity Program, which has provided up to $30 in discounts on internet bills for eligible families and as much as $75 on qualifying tribal lands. (Photo by Mayur Kakade/Getty Images)
WASHINGTON — With May 31 marking the official last day of the pandemic-era Affordable Connectivity Program, the Biden administration is spotlighting commitments from over a dozen internet service providers to offer plans at $30 or less to low-income households through 2024.
This comes as Federal Communications Commission Chairwoman Jessica Rosenworcel said the short-term program had to end due to a lack of funding, which both she and President Joe Biden are continuing to urge Congress to restore.
For over 23 million households, the Affordable Connectivity Program has provided up to $30 in monthly discounts on internet bills for eligible families and as much as $75 a month for those on qualifying tribal lands.
“The (Affordable Connectivity Program) filled an important gap that provider low-income programs, state and local affordability programs, and the Lifeline program cannot fully address,” Rosenworcel wrote in letters to congressional leaders on Thursday.
“Millions of ACP households nationwide, and households that may be eligible but have not yet enrolled, are looking to Congress to provide the funding needed to keep the ACP up and running.”
Separately, the Lifeline program provides a $9.25 monthly broadband service benefit for eligible households, according to the FCC.
But the commission said this is not an ACP replacement, and that “not all ACP households will qualify for Lifeline, and by statute, many ACP providers are not eligible to participate in the Lifeline program.”
Rosenworcel has sent monthly letters to congressional leaders outlining the need for additional funding to keep the low-cost internet program running.
Her additional letters on Thursday went to the chairs and ranking members of House and Senate appropriations panels, including Reps. David Joyce of Ohio and Steny Hoyer of Maryland and Sens. Chris Van Hollen of Maryland and Bill Hagerty of Tennessee.
Rosenworcel also sent another round of letters to the chair and ranking member of the Senate Committee on Commerce, Science, and Transportation, Sens. Maria Cantwell of Washington and Ted Cruz of Texas, and the chair and ranking member of the House Committee on Energy and Commerce, Reps. Cathy McMorris Rodgers of Washington and Frank Pallone of New Jersey.
In her most recent letter, Rosenworcel said it was “regrettable” that the FCC must end the “most successful broadband affordability program in our Nation’s history.”
She highlighted some of the possible impacts of the program ending for many military families and millions of households with school-aged children enrolled in the program.
Additionally, Rosenworcel said “the end of ACP will also impact approximately 3.4 million rural households and over 300,000 households in Tribal areas.”
Meanwhile, the administration said over a dozen providers committed to offering “their current ACP subscribers and other eligible households a high-speed internet plan for $30 per month or less, with no fees and data caps, until the end of 2024.”
In October, Biden asked Congress for $6 billion in a supplemental funding request to keep the ACP funding running through the end of 2024.
Still awaiting noise relief, some rural Kentuckians point to Arkansas’ new crypto mining law
https://www.academytrans.com/2024/05/10/still-awaiting-noise-relief-some-rural-kentuckians-point-to-arkansas-new-crypto-mining-law/
Liam Niemeyer | Fri, 10 May 2024
The Artemis Power Tech facility, photographed last year before noise canceling blankets were installed, sits among the Wolfe County woods, power lines connecting it to a nearby substation. (Kentucky Lantern photo by Liam Niemeyer)
Nine months after a suspected cryptocurrency mine moved into her previously quiet part of Wolfe County, Brenda Campbell says noise canceling blankets installed by the operator are not helping and she still doesn’t know where to turn for relief from the constant, intrusive whirring.
Wolfe County Judge-Executive Raymond Banks agrees the noise blankets haven’t been very effective, though he said the data center company told him the noise levels are so low it wasn’t necessary to “even have to put what they put up there.”
“I think the sound will always be there regardless of what they do,” Banks recently told the Lantern.
Campbell and Banks are looking to Frankfort for answers and point to recent action by Arkansas lawmakers as a possible roadmap.
Spurred by rural communities’ complaints about noise and disruption, Arkansas lawmakers passed and Gov. Sarah Huckabee Sanders signed legislation this month requiring cryptocurrency mining operations to obtain state permits and, when they operate near homes, to implement “noise-reduction” techniques. The new law also bans some foreign governments, most notably China, from owning a cryptocurrency mining operation in Arkansas and restores authority to local governments to pass regulations related to cryptocurrency mines.
To do this, the Arkansas legislature had to reverse a law that it had passed just last year, in part at the urging of a pro-crypto lobbying group, the Satoshi Action Fund. The earlier law had limited local governments’ authority to regulate noise from the facilities.
Earlier this year, the Satoshi Action Fund lobbied the Kentucky legislature to pass House Bill 741, which was approved by the House but died in the Senate.
Rural counties without zoning most vulnerable
Crypto mining uses tremendous amounts of electricity to run high-powered computers that solve complex mathematical equations to secure online transactions of cryptocurrencies through a digital ledger called the “blockchain.” Mining companies that, for example, solve these equations for the cryptocurrency Bitcoin are rewarded with Bitcoin itself; one unit in Bitcoin was worth more than $60,000 earlier this month.
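The “complex mathematical equations” are, more concretely, a brute-force search: miners hash candidate data over and over until the result falls below a network-set target, a scheme known as proof of work. A toy sketch in Python — real Bitcoin mining double-hashes an 80-byte block header with SHA-256 against a vastly harder target, so the function name and difficulty here are illustrative only:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Toy proof-of-work: find a nonce so that the SHA-256 hash of
    (block_data + nonce) starts with `difficulty` zero hex digits.
    Expected work grows 16x with each added zero digit, which is why
    real mining consumes so much electricity (and runs so many fans)."""
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce
        nonce += 1

nonce = mine("example block", difficulty=4)  # ~65,000 hash attempts on average
print(f"winning nonce: {nonce}")
```

There is no shortcut: the only way to win is to try nonces faster than everyone else, which is what ties mining revenue directly to electricity consumption.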
Electric fans used to cool the computers can generate a lot of sound.
The Kentucky sponsor of the industry-sought bill, Rep. Adam Bowling, R-Middlesboro, said the legislation would steer noisy, large-scale cryptocurrency mining operations to industrial parks and other designated places for industry.
But others said the bill would leave the many rural Kentucky counties that have no zoning laws without protection or recourse from crypto mining noise.
As recently as 2020, a little more than half of Kentucky counties had no planning and zoning offices or boards, according to a presentation that year by the Kentucky Association of Counties. Wolfe County is one of them.
Bowling appeared before a legislative committee on March 13 with representatives from the Satoshi Action Fund and the Kentucky Blockchain Council, both groups supporting the bill.
Bowling told the Lantern that some local governments are reactively trying to “change the rules and kind of patch it together” once cryptocurrency mining operations are already established and that the bill would put regulations in place before operations move into a community.
“They want to get the rules and the regulations set out on the front end before they make multimillion dollar investments and then things are changed after the fact,” Bowling said.
An environmental group expressed concerns at the time that the bill would still allow for intrusive noise pollution from mining operations. Audrey Ernstberger, a lobbyist with the Kentucky Resources Council, told members of the House Banking and Insurance Committee in March that the bill “would selectively override the ability of local communities” to reasonably regulate off-site noise impacts from asset mining operations.
Ernstberger said the bill would have held crypto mines to the community’s most “lenient” noise regulations, instead of a more protective standard, and set no specific noise level parameters.
The bill cleared the Kentucky House with Republican support in a largely party line vote, but upon reaching the Senate was never assigned a committee.
In an interview with the Lantern, Ernstberger said the bill would have blocked communities from implementing more stringent noise or zoning regulations after a cryptocurrency mining operation has moved in. The bill provides no conditions for operating cryptocurrency mining facilities in a county without noise or zoning laws, which Ernstberger said could allow facilities to establish themselves anywhere without zoning.
Eric Peterson, the director of policy for the Satoshi Action Fund, in response to a question from a lawmaker at the committee hearing, said the group doesn’t want large-scale mining operations to set up “next to our house, next to a school, next to a church.”
“Businesses in industrial zones, they create noise, they use a lot of energy. That’s where we want these things,” Peterson said.
The Satoshi Action Fund didn’t respond to emails requesting an interview about their advocacy on HB 741.
‘Just really no business like it’
In Arkansas, one of the new law’s sponsors, Republican state Sen. Missy Irvin, said the measure will provide “great legal standing” for those opposing nuisance cryptocurrency mines, referencing a legal battle between residents of an Arkansas community and a crypto mine.
Another Arkansas Republican, Sen. Bryan King, of Green Forest, wanted the legislature to go even further by providing communities more notice of where crypto companies plan to mine. King unsuccessfully tried to require mining companies to file a notice with the state six months before buying or leasing land.
King doesn’t see zoning, which local governments traditionally use to decide where industry and business can locate, as a solution, in part because rural communities like those he represents do not want zoning. He also points to the unique nature of crypto mining.
“There’s just really no business like it,” King told the Lantern. “You may have a mill or industry that may emit noise from 8 to 5, but not 24-7, 365 (days a year).”
‘Just one day it’s there’
In Kentucky, Banks, the Wolfe County judge-executive, remains firmly opposed to zoning or a noise ordinance, fearing such laws could hamper future economic development in a county where many struggle with poverty.
But Banks told the Lantern he wishes the county had received some notice that Artemis Power Tech’s data center was moving into his county before it powered up.
“We’re clueless — just one day, it’s there. Nobody ever mentioned it to me,” he said.
“I don’t have the answer to it. If I had known the thing was going in before it went in, I could have maybe have done some kind of ordinance to stop it.”
Campbell and her neighbors, who include her daughter, grandson and cousins, are still bothered by the high-pitched noise from the data center that set up shop next to an electrical substation in August 2023. Her efforts to curb the noise haven’t gained traction.
An Artemis Power Tech representative in an email said the company in January installed “noise canceling blankets” after a “third-party construction site noise control assessment” and “have heard no further complaints since then.”
The representative added the company is “willing to take reasonable measures through open communication to ensure we are a long-term trusted partner here.”
But Campbell said the blankets “have not helped at all.” She says the noise seems worse on days when the weather is warmer because the cooling fans run harder.
The Artemis Power Tech representative told the Lantern it decided to install noise blankets instead of a physical noise barrier because building such a barrier would require trees to be cut down and “reduce noise absorption effectiveness.”
Banks had told the Lantern in October the noise issue would be fixed with the company promising to install a “noise barrier.”
Campbell said that what started as her concerns about noise have expanded to broader worries about regulation of the cryptocurrency industry as a whole — who owns and runs cryptocurrency mining operations, how cryptocurrency is being used and more.
“I just feel like that somebody has dropped the ball and that they’re just not paying attention,” Campbell said. “It just seems like the whole situation is being ignored, not only at the local level but also, you know, at the state and national level.”
She applauds what Arkansas is doing to protect local communities.?
“They changed their mind that something needed to be done,” Campbell said.
Lane Boldman, executive director of the Kentucky Conservation Committee, a statewide environmental advocacy group, said Campbell’s situation is an example of a crypto mining operation taking advantage of a rural county’s lack of zoning.
Boldman noted that solar energy projects must promise to mitigate excessive construction noise as part of a hearing before the Kentucky Public Service Commission.
Cryptocurrency mining operations don’t have any similar process to mitigate local impacts, she said.
“They probably cause just as much noise, if not worse noise in some ways, because it doesn’t stop,” Boldman said.
Virtual reality could help treat eating disorders, Louisville researchers say
https://www.academytrans.com/briefs/virtual-reality-could-help-treat-eating-disorders-louisville-researchers-say/
Sarah Ladd | Tue, 23 Apr 2024
Cheri Levinson is an associate professor with the university and the director of Louisville’s Eating Anxiety Treatment (EAT) Lab. (Photo provided)
LOUISVILLE — Kentucky researchers are developing and expanding a virtual reality treatment for eating disorders using a $125,000 grant from the National Eating Disorders Association, the University of Louisville announced Tuesday.
About 9% of Americans live with eating disorders, which can lead to a “preoccupation” with food intake, weight, calories and more, according to NEDA. Girls are more likely to have disordered eating, the Lantern has reported.
Louisville researchers who study this issue want to expand the use of virtual reality (VR) technology that they have developed. This can help patients “face their fears of gaining weight” through simulations, they say. Researchers have already conducted a pilot study on such technology, which UofL says was “effective” in helping patients.
Need help?
If you or someone you know has an eating disorder, you can get help through the National Eating Disorder Association. Call 800-931-2237 or chat online at nationaleatingdisorder.org.
The Louisville Center for Eating Disorders provides nonemergency services, including outpatient therapy. Visit louisvillecenterforeatingdisorders.com or call 502-205-1114 for more information.
“Research shows exposure treatment can be really effective in taking back control over these devastating and life-altering fears,” Christina Ralph-Nearman, a UofL College of Arts and Sciences assistant research professor, researcher and co-inventor of the technology, said in a statement. “Our virtual simulation allows people to do that in a safe way.”
Cheri Levinson, an associate professor with the university and the director of Louisville’s Eating Anxiety Treatment (EAT) Lab, is currently leading several projects to tackle and treat eating disorders. In November, Levinson and her team were awarded an $11.5 million grant from the National Institutes of Health to study treatment options.
“Despite the high prevalence of eating disorders, there still aren’t many options for treatment and prevention,” Levinson said. “This work will not only create new options by leveraging technology, but open previously unopened doors for treating people on a personal, individual level.”
Levinson wants to use the grant money to expand this new technology to be “more inclusive of all body types and sizes, ethnicities, races and gender identities and to further test outcomes in a clinical setting.”
“Eating disorders don’t just affect one type of person — there are a multitude of factors that can influence them,” she said. “Treatment and prevention options should reflect that full range of experience.”
National privacy standard eyed by Congress for data harvested by big tech companies
https://www.academytrans.com/2024/04/18/national-privacy-standard-eyed-by-congress-for-data-harvested-by-big-tech-companies/
https://www.academytrans.com/2024/04/18/national-privacy-standard-eyed-by-congress-for-data-harvested-by-big-tech-companies/#respond[email protected] (Ashley Murray)Thu, 18 Apr 2024 09:00:28 +0000https://www.academytrans.com/?p=16783
Ava Smithing, director of advocacy for the Young People’s Alliance, testifies before a U.S. House Committee on Energy and Commerce subpanel on April 17, 2024, on several data privacy bills being considered in Congress. (Screenshot from U.S. House Committee on Energy and Commerce)
WASHINGTON — U.S. House members tasked with addressing what happens to loads of user data collected by big tech companies see a “long overdue” opportunity for a national privacy standard, particularly for children and teens.
Lawmakers on a subpanel of the House Committee on Energy and Commerce met Wednesday to hear from advocates and online safety experts on a series of data privacy bills that are drawing rare bipartisan and bicameral support.
The 10 bills discussed by six witnesses and members of the Subcommittee on Innovation, Data and Commerce would regulate how data is collected and stored, allow users to opt out of algorithms, and ensure safeguards for minors on the internet.
The hearing came on the heels of widespread bipartisan support for a bill that would force the popular video platform TikTok to split from its Chinese parent company ByteDance. The legislation passed the House in March in a 352-65 vote.
“Today we find ourselves at a crossroads,” said Energy and Commerce Committee Chair Cathy McMorris Rodgers. “We can either continue down the dangerous path we’re on, letting companies and bad actors continue to collect massive amounts of data unchecked, or we can give people the right to control their information online.”
Washington state lawmakers unite
The Washington Republican’s discussion draft of the American Privacy Rights Act was a focus of the Wednesday hearing.
The bipartisan, bicameral proposal, introduced alongside Senate Committee on Commerce Chair Maria Cantwell, a Washington Democrat, would shrink the amount of data companies can collect, regulate data brokers, allow users to access their own data and request deletion, and empower the Federal Trade Commission and state attorneys general to enforce the policies.
Placing the burden on consumers to read “notice and consent” privacy agreements “simply does not work,” said Energy and Commerce Committee ranking member Frank Pallone of New Jersey.
“By contrast, data minimization limits the amount of personal information entities collect, process, retain and transfer to only what is necessary to provide the products and services being requested by the consumer,” Pallone said, praising provisions in the American Privacy Rights Act.
Rodgers said the “foundational” legislation would protect minors and establish a national standard to quash a “modern form of digital tyranny where a handful of companies and bad actors are exploiting our personal information, monetizing it and using it to manipulate how we think and act.”
One national standard would preempt “the patchwork of state laws, so when consumers and businesses cross state lines, there are consistent rights, protections and obligations,” GOP Rep. Gus Bilirakis of Florida, the subcommittee’s chair, said during his opening remarks.
Seventeen states have enacted their own privacy laws and regulations with another 18 states actively pursuing various pieces of legislation, creating a “complex landscape of state-specific privacy laws,” testified Katherine Kuehn, chief information security officer-in-residence for the National Technology Security Coalition, a cybersecurity advocacy organization.
‘Insecurity as data’
Among the other proposals the panel discussed was an update to the 1998 Children’s Online Privacy Protection Act, co-sponsored by Michigan Republican Rep. Tim Walberg and Kathy Castor, a Florida Democrat.
The bill aims to ban targeted advertising to children and teens, prohibit internet companies from collecting the data of 13-to-17-year-olds without consent, and require direct notice if data is being stored or transferred outside of the U.S.
Ava Smithing of Nashville, Tennessee, described for the committee her teen years spent on Instagram and the body image issues and eating disorder that ensued after repeated targeted content.
“The companies’ abilities to track engagements, such as the duration of time I looked at a photo, revealed to them what would keep me engaged — my own insecurity,” she testified.
“They stored my insecurity as data and linked it to all my other accounts across the internet. They used my data to infer what other types of content I might ‘like,’ leading me down a pipeline from bikini advertisements to exercise videos to dieting tips and finally to eating disorder content,” Smithing, director of advocacy for the Young People’s Alliance, said.
‘Big tech has failed’
Bilirakis is a sponsor of the similarly named Kids Online Safety Act, along with fellow Reps. Erin Houchin, an Indiana Republican, Washington Democrat Kim Schrier and Castor.
“We know that big tech has failed, ladies and gentlemen, to prioritize the health and safety of our children online, resulting in a significant increase in mental health conditions, suicide and drug overdose deaths. We’ve heard stories over and over and over again in our respective districts,” Bilirakis said.
Bilirakis’ bill would outline a set of harms to children under 17 and require big tech and video game companies to mitigate those harms. The bill also aims to increase parental protections on platforms and commission a study of age verification options.
A companion bill in the U.S. Senate has been introduced by Connecticut Democrat Richard Blumenthal and Tennessee Republican Marsha Blackburn.
Samir C. Jain, of the Center for Democracy and Technology, told the House panel that some proposals, including the Kids Online Safety Act, “while well-intentioned and pursuing an important goal, do raise some concerns.”
“Legislation that restricts access to content because government officials deem it harmful can harm youth and present significant constitutional issues,” said Jain, vice president of policy for the civil liberties advocacy organization.
“Further, requirements or strong incentives to require age verification systems to identify children often require further data collection from children and adults alike, and thereby can undermine privacy and present their own constitutional issues,” Jain testified.
However, Jain praised provisions in the American Privacy Rights Act that would increase transparency into the algorithms employed by large data companies and “prohibit using data in a way that perpetuates or exacerbates discrimination based on protected characteristics such as race, sex, religion, or disability status — whether a Black person looking for a job, a woman seeking a loan to start a business, or a veteran with a disability trying to find housing.”
During questioning, Bilirakis asked each panelist: “Yes or no, do you think this is the best chance we have to getting something done on comprehensive data privacy?”
All witnesses answered yes.
Meta, which owns Instagram, did not respond to a request for comment.
Open records loopholes die in Kentucky Senate. Attempt to revive anti-DEI bill also fails.
https://www.academytrans.com/2024/04/15/open-records-loopholes-die-in-kentucky-senate-attempt-to-revive-anti-dei-bill-also-fails/
https://www.academytrans.com/2024/04/15/open-records-loopholes-die-in-kentucky-senate-attempt-to-revive-anti-dei-bill-also-fails/#respond[email protected] (McKenna Horsley)Tue, 16 Apr 2024 03:29:08 +0000https://www.academytrans.com/?p=16726
Kentucky Capitol (Arden Barnes)
FRANKFORT — A controversial rewriting of the Kentucky Open Records Act died in the Senate as the 2024 regular session ended Monday, but Republican leaders said lawmakers will revisit the issue during the interim before next year’s session.
An attempt to revive Republican legislation targeting diversity, equity and inclusion (DEI) in higher education also failed on the final day of this year’s session. Republican leaders said that effort also is likely to be renewed during the interim.
“This DEI issue is not going away,” Senate Republican Floor Leader Damon Thayer told reporters. “Most of us feel that there is an issue on our college campuses, but there just weren’t enough votes to move forward with the House version of the bill.”
Because the session ended Monday, lawmakers would have been unable to override a veto by Democratic Gov. Andy Beshear, who has voiced strong support for DEI initiatives and programs.
“There was a question not of whether to do something for DEI, it was a question of what to do,” Senate Republican President Robert Stivers told reporters. “And between the realities of the timing and that the governor could control anything, members felt like it was best to wait and try to work through the process.”
Open records
The Senate never took up House Bill 509, which open government advocates warned would add loopholes to the Kentucky Open Records Act by limiting searches for public records to only devices or accounts owned by a government agency.
The bill also said public agencies that found employees violating it by using a personal cell phone or email account for official communications could discipline them, but it’s unclear whether those records could be publicly disclosed. Additionally, the bill did not specify what disciplinary measures an agency could take against violators.
Opponents said the bill would encourage public officials to conduct public business on personal devices to keep it secret.
Thayer said there “wasn’t any energy in the caucus to get it done” because the bill had several floor amendments attached to it.
Beshear had voiced support for the bill’s changes to the open records law. He has argued that it would allow more records to be disclosed by requiring that public officials be furnished an agency email account that a government agency could easily search.
Throughout the session, politicians in Frankfort have lauded HB 509 as necessary to protect public officials’ privacy but critics have noted the bill could prevent public records created on private devices from being disclosed. The legislation would not have required agencies to search for public records on personal devices.
Stivers, of Manchester, said he had been in favor of the bill. He gave an example of public officials using a private device to take public calls or emails while also using it for family or personal business.
“I think it was really the complexity of the issue and the timing of what people wanted to see and how you delineate between personal and private, but making sure that you didn’t cross over the line by having a public phone,” Stivers said.
The Senate president said he had “no doubt” discussion of Kentucky’s open records law will come up again in the interim.
Thayer, too, said it was “an issue that probably isn’t going away.” He is on the Senate State and Local Government Committee and hopes to urge sponsors of the bill and amendments to work together before next year’s session.
“It’s such a moving target with modern technology, and the fact that we are part-time citizen legislators, it’s hard not to do some of your government work on this phone,” he said.
Thayer added he did not have his Legislative Research Commission email on his phone “for a reason,” but it would be “naive” to say he does not do any “government work” on it.
“It would make me ineffective if I couldn’t, but also I don’t want people to have access to my personal phone,” he said.
The Open Records Act, enacted in 1974, includes exceptions that shield personal and private information from disclosure.
Lawmakers in the General Assembly are already shielded from the Kentucky Open Records law. They passed a law in 2021 that made legislative leaders the final arbiter of decisions about disclosing legislative records. Beshear issued a futile veto of the bill at the time.
Diversity, equity, inclusion
When he originally filed Senate Bill 6, Republican Whip Mike Wilson, of Bowling Green, said the goal was to prevent public postsecondary institutions from requiring employees and students to “endorse a specific ideology or political viewpoint” as part of graduation or hiring practices. The Senate passed his version on party lines.
The House overhauled Senate Bill 6 to require ending DEI programs and offices at public universities and colleges. Rep. Jennifer Decker, R-Waddy, had amended Wilson’s bill in committee without consulting him.
The Senate refused to concur with the sweeping changes made in the House.
The bill was posted on the Senate’s orders of the day Monday but was never called for debate or a vote.
On Monday, Thayer said “the House was unwilling to work on” Wilson’s bill, and that became an issue for the Senate. He added that the caucus did like a lot in the House version but had “some constitutional concerns.”
“I urged those who are interested in that bill — and it’s a fairly large group — to work together in the interim,” said Thayer, who is not seeking reelection to his Senate seat this year.
While no anti-DEI legislation crossed the finish line this year, the General Assembly’s proposals followed a nationwide trend of conservative politicians rolling back such measures.
The Senate did concur with a House floor amendment to Senate Bill 191, a postsecondary funding bill, that would prohibit the use of “any race-based metrics or targets in the formulas” for the higher education funding model.
Kentuckians lacked forecasting, broadband as July storms quickly swelled into deadly flood
https://www.academytrans.com/2023/03/27/kentuckians-lacked-forecasting-broadband-as-july-storms-quickly-swelled-into-deadly-flood/
https://www.academytrans.com/2023/03/27/kentuckians-lacked-forecasting-broadband-as-july-storms-quickly-swelled-into-deadly-flood/#respond[email protected] (Anya Slepyan)[email protected] (Claire Carlson, The Daily Yonder)Mon, 27 Mar 2023 09:50:38 +0000https://www.academytrans.com/?p=3927
Terry Thies adjusts the post of the bed that belonged to her mother. It’s the same bed she woke up in to find that her home had flooded overnight last July. (Photo by Xandr Brown.)
Terry Thies wasn’t worried about the rain that pounded on her roof last July.
She had received no flood warnings before going to sleep that night. Besides, her part of rural Perry County in Eastern Kentucky often gets heavy rain.
So early the next morning when her foot hit the water lapping the bottom of her wooden bed frame, Thies’ first thought was that the toilet had overflowed. But as she scanned her bedroom for the water’s source, she realized this was something else entirely.
“I came into the kitchen and opened the door and water was flowing down the lane,” Thies said. “Water was in my yard and rushing down. And I was like, well, I guess I’ve been flooded.”
In the days leading up to the storm, the National Weather Service predicted heavy rain and a moderate risk of flooding across a wide swath of eastern Kentucky and West Virginia. What happened instead was a record-breaking four-day flood event in eastern Kentucky that killed a confirmed 43 people and destroyed thousands of homes.
And though the National Weather Service issued repeated alerts, many people received no warning.
“Not a soul, not one emergency outlet texted me or alerted me via phone,” Thies said.?
“Nobody woke me up.”?
Thies’ experience in the July floods reveals troubling truths about Kentucky’s severe weather emergency alert systems. Imprecise weather forecasting and spotty emergency alerts due to limited cellular and internet access in rural Kentucky meant that Thies and many others were wholly unprepared for the historic flood.
Efforts to improve these systems are underway, but state officials say expansions to broadband infrastructure will take at least four years to be completed in Kentucky’s most rural counties. In a state where flooding is common, these improvements could be the difference between life and death for rural Kentuckians.
But there’s no guarantee they’ll come before the next climate change-fueled disaster.
An urban bias in forecasting
The first system that failed eastern Kentuckians in July was the weather forecasting system, which did not accurately predict the severity of the storm. A built-in urban bias in weather forecasting is partially to blame.
“Did we forecast (the storm) being that extreme? No, we didn’t,” said Pete Gogerian, a meteorologist at the National Weather Service station in Jackson, Kentucky, which serves the 13 eastern Kentucky counties affected by the July floods.
For the days preceding the storm, the Jackson station warned of a ‘moderate risk’ of flooding across much of their service area. Observers with the benefit of hindsight might argue that a designation of ‘high risk’ would have been more appropriate. But Jane Marie Wix, a meteorologist at the Jackson station, wrote in an email to the Daily Yonder that the high-risk label is rarely issued, and simply didn’t match what the model was predicting for the July storms.
“When we have an event of this magnitude, we’ll go back and look at, are there any indicators? Did we miss something? Was there really any model predicting this kind of event?” Gogerian said. “But when you looked at (the flooding in) eastern Kentucky, it just wasn’t there.”
“I don’t think anyone could have predicted just how bad it was going to end up being,” Wix wrote.
Wix says the moderate risk warning was enough to warn people that the storm could have severe impacts in many locations. But the model’s inaccuracy demonstrates a flaw in the National Weather Service forecasting system that was in use at the time of the flood.
Extreme weather is hard to predict in any setting, but rural regions like eastern Kentucky are at an additional disadvantage due to an urban bias baked into national weather forecasting systems, according to Vijay Tallapragada, the senior scientist at the National Weather Service’s Environmental Modeling Center.
Forecasting models depend on observational data — information about past and present weather conditions — to predict what will come next. But there’s more data available for urban areas than for rural areas, according to Tallapragada.
“Urban areas are observed more than rural areas … and that can have some, I would say, unintended influence on how the models perceive a situation,” he said.
Although spaceborne satellites and remote sensing systems provide a steady supply of rural data, other methods of observation, like aircraft and weather balloons, are usually concentrated in more densely populated areas.
“Historically, many weather observations were developed around aviation, so a lot of weather radars are located at major airports in highly populated cities,” said Jerry Brotzge, Kentucky state climatologist and director of the Kentucky Climate Center. “That leaves a lot of rural areas with less data.”
Weather prediction models are based on past events, so the lack of historical weather data in rural areas poses a serious challenge for future predictions, according to Brotzge. “For large areas of Appalachia, we just don’t know the climatology there as well as, say, Louisville or some of the major cities,” he said.
This lack of current and historical weather observation can leave rural areas vulnerable to poor weather forecasting, which can have catastrophic results in the case of extreme weather events.
Rural forecasting solutions
A new forecasting model, however, could close the gap in rural severe weather prediction.?
The new Unified Forecast System is being developed by the National Weather Service and a group of academic and community partners. The modeling system is set to launch in 2024, but the results so far are promising, according to Tallapragada.
“In the next couple years, we will see a revolutionary change in how we are going to predict short-range weather and the extremes associated with it,” he said.
The problem with the current system, said Tallapragada, is that it depends on one model to do all the work.
A new application called the Rapid Refresh Forecast System is set to replace that single model with an ensemble of 10 models. Using multiple models allows meteorologists to introduce more statistical uncertainty into their calculations, which produces a broader, and more accurate, range of results, according to Tallapragada. He said that although the new system is not yet finished, it has already proven to be on par with, or better than, the current model.
The Rapid Refresh Forecasting System will mitigate the disparity between urban and rural forecasting because it depends more on statistical probabilities and less on current and historical observational data, which is where the biggest gap in rural data currently lies, according to Tallapragada.
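The core idea behind ensemble forecasting can be sketched in a few lines of code: run several copies of a model from slightly perturbed starting conditions and report the spread of outcomes rather than a single number. The toy model below is an invented stand-in for illustration only, not the actual Rapid Refresh Forecast System, which runs full physics-based weather models:

```python
import random
import statistics

def toy_rainfall_model(initial_moisture: float, seed: int) -> float:
    """Invented stand-in for a physics model: maps an initial condition
    to a 24-hour rainfall total in inches. Not a real forecast model."""
    rng = random.Random(seed)
    rainfall = initial_moisture * 2.0
    for _ in range(24):  # one step per hour; occasional convective bursts
        if rng.random() < 0.1:
            rainfall += rng.uniform(0.0, 0.5)
    return rainfall

def ensemble_forecast(initial_moisture: float, n_members: int = 10) -> dict:
    """Run n_members copies of the model, each from a slightly perturbed
    initial condition, and summarize the spread of their outcomes."""
    members = [
        toy_rainfall_model(
            initial_moisture + random.Random(i).uniform(-0.2, 0.2),  # perturbation
            seed=i,
        )
        for i in range(n_members)
    ]
    return {
        "low": min(members),
        "high": max(members),
        "mean": statistics.mean(members),
        # fraction of members exceeding a (hypothetical) extreme threshold
        "p_extreme": sum(m > 4.0 for m in members) / n_members,
    }

forecast = ensemble_forecast(initial_moisture=1.5)
print(forecast)
```

A single deterministic run would return one rainfall total; the ensemble instead yields a range and a probability of an extreme outcome, which is what lets forecasters flag a low-probability, high-impact event like the July flood even when the "average" model run looks unremarkable.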
The system could also mean improved accuracy when it comes to predicting severe weather, like Kentucky’s July flood event.
“The range of solutions provided by the new system will capture the extremes much better, independent of whether you are observing better or poorly,” Tallapragada said. “That’s the future of all weather prediction.”
As extreme weather events become more common due to climate change, this advancement in weather forecasting has the potential to transform local and regional responses to severe weather. But without massive investments in broadband, life-saving severe weather alerts could remain out of reach for rural communities.
The crucial role of broadband
Over a year before the July 2022 floods devastated eastern Kentucky, some counties in the same region were hit by floods that, while not as deadly, still upended lives.
“There were no warnings for that flood,” said Tiffany Clair, an Owsley County resident, in a Daily Yonder interview. “It was fast.”
Clair received no warning when extreme rains hit her home in March of 2021, which severely damaged nearby towns like Booneville and Beattyville. “I did not think that those (towns) would recover,” Clair said.
Businesses and homes were impaired for months after the flood, affecting not only the people in those communities but those from neighboring communities as well.
“We live in a region where we travel from township to township for different things, and (the March 2021 floods) were a blow to the region and to the communities, because we’re kind of interlocked around here,” Clair said. “It’s part of being an eastern Kentuckian.”
A little over a year later, Clair faced more flooding, this time enough to displace her and her children. They now live with Clair’s mother.
This time around, Clair did receive an emergency warning, but questioned the method through which these warnings were sent. “(The warnings) did go all night, the last time, in July,” Clair said. “But if you don’t have a signal or if your phone’s dead, how are you getting those?”
During severe weather events, people are alerted of risk through a handful of ways. Weather information reported from regional National Weather Service offices is disseminated through local TV and radio stations, specialized weather radios, and the Federal Emergency Management Agency’s wireless emergency alerts.
But in rural eastern Kentucky in July, the most common way people learned about the flooding was by seeing the water rise firsthand, according to a report from the Kentucky Department of Public Health.
The agency surveyed people from over 400 households in Breathitt, Clay, Floyd, Knott, Letcher, Owsley and Perry counties, as well as displaced residents living in three shelter sites. The goal of the study was to understand how the floods affected Kentuckians and determine ways to better prepare for the next emergency.
Nearly 14 percent of households in Letcher, Knott, Owsley and Perry counties and 28 percent of households in Breathitt, Clay, Floyd and Pike counties reported difficulty accessing internet, television, radio, and cell service for emergency communications during the floods. Cell phone service and internet access were the top two communication methods residents reported the most difficulty accessing.
The floods killed a confirmed 43 people: 19 from Knott County, 10 from Breathitt, seven from Perry, four from Letcher, two from Clay, and one from Pike County. Several more people died after the floods due to related health complications.
In Knott and Breathitt counties, where death counts were the highest, approximately 32 percent of residents do not have broadband access, according to U.S. Census Bureau data. And in 10 of the 13 counties flooded in July, more than a quarter of residents lack broadband access.
Rural areas across the country are underserved when it comes to broadband, but eastern Kentucky is a special trouble spot, where high costs to serve rural customers have stopped internet companies from setting up broadband in rural areas. In 2017, Kentucky ranked 47th in the nation for broadband access, according to the Kentucky Communications Network Authority.
“There’s a lot of frustration because a lot of these internet service providers are profit-based companies,” said Meghan Sandfoss, executive director of the state’s newly created Office of Broadband Development. “So it’s hard for them sometimes to make a business case for the more remote and low density locations.”
The state’s effort to expand broadband has sputtered for years due to missteps by government officials, according to ProPublica reporting. An internet connectivity project, KentuckyWired, was launched in 2013 with the goal of constructing 3,000 miles of high-speed fiber optic cable in every Kentucky county by 2018. The project didn’t reach its final steps until fall of 2022, according to a KentuckyWired construction map.
Getting the cable laid down is only one part of the process: for individual households and businesses to actually access the internet, third-party providers need to connect their own fiber systems to the network, according to the Kentucky Communications Network Authority. This “last-mile” infrastructure is critical to broadband expansion, but progress has been slow.
“That might be another 10 years or 20 years while all that last-mile stuff gets built,” said Doug Dawson, a telecommunications consultant, in a ProPublica interview from 2020.
To speed up this process, both the state and federal governments have recently directed funds toward improved internet connectivity and last-mile infrastructure.
In June of 2022, Kentucky Governor Andy Beshear announced a $203 million investment in last-mile infrastructure funded through the American Rescue Plan Act. Another $20 million in grants was made available in September for broadband providers to replace utility poles that provide connectivity in underserved areas. And early this year, another $182 million in federal funding was awarded to fund Kentucky’s “Better Internet” grant program.
This grant program is focused on making it more commercially feasible for private internet providers to reach rural areas, said Sandfoss from the Office of Broadband Development. The priority is to build broadband infrastructure in unserved locations where there is no internet, versus under-served locations with limited internet access.
“A frustration we hear frequently is that all these new locations are being connected and everybody else has to wait,” Sandfoss said. “But that’s just the federal funding priority, and that’s the way we’ve got to do it.”
Construction on the state’s broadband infrastructure expansions is expected to occur over the next four years.
As extreme weather continues to batter rural Kentucky — floods in February killed one person in rural Marion County — some locals aren’t waiting for governmental changes to better protect themselves in the face of disaster.
Terry Thies, whose childhood home was flooded in July, has decided to sell her house.
“Now that it has flooded, it will probably flood again,” Thies said. She plans to move up the mountain, away from the creek that damaged her home. “I just don’t want to go through it again.”
But Kentuckians who don’t have the financial means to move away from higher-risk flood areas may be stuck in place. Eastern Kentucky is in the middle of a major housing crisis: affordable housing is sparse, buildable land outside flood zones is limited, and construction costs for new homes can be prohibitively expensive.
“(The flood) was horrible, but we were very, very lucky,” said Tiffany Clair, whose home was destroyed in the July flood. Clair and her children were able to move in with her mother when they lost housing. “But the next time I don’t think we’ll be that lucky.”
Clair believes that rural Kentucky’s ability to withstand the next natural disaster hinges on the actions taken by local and state leaders.
“We can’t do anything to prepare for it. It is going to take our leaders, it is going to take our politicians,” she said.
“They’re the ones that have to prepare for it because we can’t.”
Additional reporting by Caroline Carlson and Xandr Brown.
Grist is a nonprofit, independent media organization dedicated to telling stories of climate solutions and a just future. Learn more at Grist.org.
TikTok banned from Kentucky government devices and networks
https://www.academytrans.com/briefs/tiktok-banned-from-kentucky-government-devices-and-networks/
[email protected] (McKenna Horsley)Thu, 23 Mar 2023 22:52:48 +0000https://www.academytrans.com/?post_type=briefs&p=3848
(Photo Illustration by Drew Angerer/Getty Images)
Democratic Gov. Andy Beshear signed legislation Wednesday codifying into law recent bans on the social media site TikTok from state government-owned devices and networks.
While presenting the bill in past debates, primary sponsor Sen. Robby Mills, R-Henderson, has cited concerns from the FBI about national security risks posed by TikTok and its capacity to influence users or their devices. The social media site and popular mobile app allows users to share minutes-long videos. It’s owned by Chinese tech giant ByteDance.
“Most Chinese companies are connected, directed or partially owned by the Chinese government,” Mills said on the Senate floor earlier this legislative session. “It has been reported on multiple news sources and confirmed that TikTok mines huge amounts of private data which the Chinese government, a foreign adversary of the United States, would have access to.”
Under the new law, which took effect immediately, the Commonwealth Office of Technology and the legislative branch must implement controls to block the site on state-issued devices and networks. The judicial branch may also implement a ban or restrictions.
Earlier this year, executive branch employees were barred from downloading or using TikTok or other sites owned by ByteDance on state-issued devices. The Legislative Research Commission also banned the use of TikTok on LRC-issued devices on Jan. 20.
On Thursday, TikTok Chief Executive Shou Zi Chew testified before members of the U.S. House amid national security concerns over the use of TikTok and potential Chinese government influence on the platform. The Wall Street Journal reported he said the company was committed to firewalling data from U.S. users from “all unwanted foreign access” and would keep content “free from any manipulation from any government.”
House joins Senate in putting TikTok ban for Kentucky employees into law
https://www.academytrans.com/briefs/house-joins-senate-in-putting-tiktok-ban-for-kentucky-employees-into-law/
By McKenna Horsley | Wed, 15 Mar 2023 21:29:12 +0000
https://www.academytrans.com/?post_type=briefs&p=3583
(Photo Illustration by Drew Angerer/Getty Images)
FRANKFORT — With time running out for the 2023 General Assembly to pass bills, House members approved a measure to codify an executive and legislative branch ban on using TikTok.
The House voted 96-3 on Senate Bill 20 Wednesday. Those who voted no were Democratic Reps. Cherlynn Stevenson, Lisa Willner and Sarah Stalker.
The bill’s lead sponsor, Sen. Robby Mills, R-Henderson, has previously cited concerns from the FBI about national security risks posed by TikTok and its capacity to influence users or their devices. The social media site, where users can share minutes-long videos, is owned by Chinese tech giant ByteDance.
Earlier this session, the bill passed the Senate with bipartisan support and no objections from senators in that chamber.
If the bill becomes law, the Commonwealth Office of Technology and the legislative branch would have to implement controls to block the site on state-issued devices and networks. The judicial branch may also implement a ban or restrictions.
The bill’s House sponsor, Rep. Scott Sharp, R-Ashland, echoed Mills’ concerns and said: “The FBI has warned that the Chinese can use this application to influence and spy on its users.”
A House committee substitute version of the bill says the ban would not apply to Kentucky colleges and universities. Additionally, agencies that determine a need to use the app in instances like law enforcement activities, civil investigations or security threat research could use TikTok if the agency takes steps to not endanger its network or another state government-controlled network.
Earlier this year, executive branch employees were barred from downloading or using TikTok or other sites owned by ByteDance on state-issued devices. The Legislative Research Commission also banned the use of TikTok on LRC-issued devices on Jan. 20.
Governors in other states, including Arkansas, Ohio and Alaska, have introduced similar bans.
A U.S. Senate bill, which has received bipartisan and White House support, would bolster the U.S. Commerce Department’s ability to “review deals, software updates or data transfers by information and communications technology in which a foreign adversary has an interest” for national security risks, CNBC reported.
Senate committee forwards bill to codify TikTok ban on Kentucky government devices
https://www.academytrans.com/briefs/senate-committee-forwards-bill-to-to-codify-tiktok-ban-on-kentucky-government-devices/
By McKenna Horsley | Wed, 08 Feb 2023 20:46:45 +0000
https://www.academytrans.com/?post_type=briefs&p=2398
FRANKFORT — A Kentucky Senate committee on Wednesday endorsed putting into state law a ban on TikTok on state government devices or networks.
TikTok, a social media platform where content creators can make minutes-long videos about a variety of topics, is commonly used by teenagers and young adults.
The Senate State and Local Government Committee voted 9-0 to forward Senate Bill 20 with a favorable recommendation. A similar measure has been introduced in the House, though it is still in the Committee on Committees.
Sponsors of the Senate bill are Sen. Robby Mills, R-Henderson; Sen. Gary Boswell, R-Owensboro; Sen. Donald Douglas, R-Nicholasville; Sen. Stephen Meredith, R-Leitchfield; and Sen. Phillip Wheeler, R-Pikeville.
Mills, who presented the bill to the committee, said it is intended to make the ban permanent in Kentucky state law, following recent actions by the executive branch and the Legislative Research Commission to keep TikTok off state devices.
“The legislature’s affirmative action on this bill will place this ban in statute, so it will not timeout as an executive policy will or can do in future times,” Mills told the committee. “We need to protect the data that exists on our state devices.”
Last month, Kentucky’s employee handbook, which covers the executive branch, was updated to include a social media policy barring the use of social media sites owned by ByteDance Limited, such as TikTok. At least one state government account, the Kentucky Department of Tourism’s account, was deleted after the change.
The proposals in Kentucky’s General Assembly come amid calls from state and national lawmakers to ban TikTok over possible security concerns related to the site’s parent company, Chinese tech giant ByteDance.
In December, the company said four employees, two based in the U.S. and two in China, violated company policy and inappropriately accessed U.S. users’ data, the New York Times reported. NPR reported in November that FBI Director Christopher Wray said in a U.S. House Homeland Security Committee hearing that the bureau was concerned about the Chinese government potentially using the app to collect data or influence American operations.
After the committee meeting, Mills said that Senate leadership is in favor of moving the bill forward.
During the meeting, Sen. Gex Williams, R-Verona, noted that the bill says the judicial branch “may” implement controls while saying the executive and legislative branches “shall” put controls in place. Mills said drafters chose “may” to avoid the legislative branch directing the judicial branch. Williams indicated he would like to see a floor amendment on this part of the bill.
Kentuckians encouraged to challenge accuracy of new broadband map
https://www.academytrans.com/2022/12/28/kentuckians-encouraged-to-challenge-accuracy-of-new-broadband-map/
By Liam Niemeyer | Wed, 28 Dec 2022 12:00:55 +0000
https://www.academytrans.com/?p=1092
Gov. Andy Beshear is encouraging Kentuckians to challenge the accuracy of a new federal broadband availability map that will help determine how billions of federal dollars for broadband deployment will be allocated among states.
The Federal Communications Commission in November released a proposed online interactive map that lists available internet providers and the maximum advertised internet speeds from those providers for individual addresses of residences, businesses and other locations across the country. The map also includes the type of broadband technology each provider uses for reaching each address, such as fiber, cable or satellite-based internet.
The FCC is accepting challenges to fix inaccuracies in the new map until Jan. 13 based on location and internet provider availability. Such inaccuracies include an address that is incorrect or missing on the new map, or a listed internet provider that doesn’t actually serve a particular household or business.
“Access to high-speed, reliable internet service is vital infrastructure as critical to our connectivity as roads and bridges,” Beshear said in a statement. “But we must pinpoint where access is most needed to ensure we invest these dollars wisely. That’s why reviewing this FCC map is so important.”
Online users of the map can search for their address and click on “Location Challenge” or “Availability Challenge” through the online interface to fix inaccuracies.
Users of the map can also submit challenges if the advertised internet speed a provider lists doesn’t match the actual speed received, but such challenges will instead be treated as consumer complaints to the FCC.
This new map is an update of a previous version that showed only internet providers and advertised speeds for each census block, the smallest geographical unit used by the U.S. Census Bureau, an approach broadband expansion advocates criticized as inaccurate. For example, if an internet provider said it served just one house in a census block, the old map would mark every home in that census block as served by the provider.
Fixing inaccuracies in the new map could improve it and help determine how much funding Kentucky receives from the federal Broadband Equity, Access and Deployment (BEAD) program. The BEAD program was launched through passage of the Bipartisan Infrastructure Law in 2021, doling out more than $40 billion to expand broadband access across the country.
Kentucky received an initial $100 million from BEAD to boost the state’s coordination with local communities in planning broadband deployment, but how much additional funding the state gets to build broadband connections is determined in part by how many “unserved” locations it has. Under BEAD, a home or business is considered “unserved” if it has no access to broadband or lacks access to internet speeds above 25 megabits per second download and 3 megabits per second upload.
Legislation passed with bipartisan support in the Kentucky legislature the past two years has allocated $300 million in federal funding from the American Rescue Plan Act toward broadband deployment in the state. The state has so far given out some of that funding through a first round of grants for broadband deployment to electric cooperatives and local governments. The lion’s share of that funding round went to Charter Communications, one of the largest cable companies in the country.
Meghan Sandfoss, executive director of the Kentucky Office of Broadband Deployment, said in a statement that Kentuckians participating in the challenge process for the map will help make sure the state receives its “fair share of funding” through the BEAD program.
“Today’s debut marks the start of the public’s ability to offer challenges as well. The FCC has asked for challenges to the map data to be submitted between now and January 13, 2023, so that corrections can be included in a finalized version of the map.”
The final version will be used to set funding allocations for the BEAD program in summer 2023.
While the FCC will continue collecting crowdsourced speed data for fixed speeds, that data is not part of the challenge process. Rather, the map relies on maximum available advertised speeds.