Please submit your News Bytes items in plain text; other formats may be rejected unread. [You have been warned!] A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release does. Submit items to firstname.lastname@example.org. Deividson can also be reached via Twitter.
News in General
Linux Foundation Updates Linux Development Study
The Linux Foundation is publishing an update to its April 2008 study on Linux kernel development. The new report reveals trends in Linux development, and hints at topics for an upcoming LinuxCon kernel panel in September.
The new report is written by original authors and kernel developers Jonathan Corbet and Greg Kroah-Hartman, and the Linux Foundation's Amanda McPherson. The August 2009 Update reprises the title "Linux Kernel Development: How Fast it is Going, Who is Doing It, What are They Doing, and Who is Sponsoring It" and is available at http://www.linuxfoundation.org/publications/whowriteslinux.pdf.
The updated study shows a ten-percent increase since April 2008 in the number of developers contributing to each kernel release, and a net gain of 2.7 million lines of code. That works out to an average of 5.45 patches accepted per hour, up over 40 percent since the original study. Some of the accelerated pace is driven by new demand for Linux in emerging markets such as netbooks, automotive, and energy, as well as by the new linux-next tree, which scales up the development process by staging changes for the next kernel cycle.
Highlights of the report: each Linux kernel release is developed by nearly 1,000 developers working for more than 200 different corporations; Red Hat, Google, Novell, Intel, and IBM top the list of companies employing developers; and an average of 10,923 lines of code are added each day, a rate of change larger than that of any other public software project.
Corbet and Kroah-Hartman will participate on a panel at LinuxCon (http://events.linuxfoundation.org/events/linuxcon/) focused on the kernel development process, and explore some of the trends that surfaced in the new study. Linux creator Linus Torvalds and kernel community members James Bottomley, Arjan van de Ven, and Chris Wright will join them on the LinuxCon keynote panel on Monday, September 21.
Corbet and Kroah-Hartman, also members of the Linux Foundation's Technical Advisory Board (TAB), reviewed the last six kernel releases, from 2.6.24 through 2.6.30, representing about 500 days of Linux development. The report goes into detail on how the Linux development process works, including who is contributing, how often, and why.
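A quick back-of-the-envelope check in Python shows how the report's headline figures relate; the numbers are taken from the summary above, and the roughly 500-day window is approximate:

```python
# Rough sanity check of the report's headline figures.
# Figures come from the report summary; the ~500-day window is approximate.

patches_per_hour = 5.45
days = 500                      # 2.6.24 through 2.6.30, roughly
lines_added_per_day = 10_923

patches_total = patches_per_hour * 24 * days
lines_added_total = lines_added_per_day * days

print(f"~{patches_total:,.0f} patches accepted over the period")
print(f"~{lines_added_total / 1e6:.1f}M lines added gross "
      f"(vs. 2.7M net, after removals and rewrites)")
```

The gap between gross lines added and the 2.7 million net figure reflects the large amount of code that is also removed or rewritten each cycle.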
Jonathan Corbet is also the editor of Linux information source LWN.net (http://www.lwn.net/), and maintains the Linux Foundation's Linux Weather Forecast (http://www.linuxfoundation.org/collaborate/lwf/).
Greg Kroah-Hartman is a Novell Fellow, working for the SUSE Labs division of the company. He is also the Linux kernel maintainer for the USB, driver core, debugfs, kref, kobject, and sysfs kernel subsystems, and leads the Linux Driver Project: http://www.linuxdriverproject.org/.
Amanda McPherson is vice president of marketing and developer programs at the Linux Foundation, and leads its community relations and event activities.
Coalition to Promote Benefits of Open Source Software in Government
During July, a broad cross-section of more than 70 companies, academic institutions, community groups, and individuals joined together to announce the formation of Open Source for America, an organization that will be a unified voice for the use of open source software in the U.S. Federal government arena. To learn more about the coalition, visit http://www.opensourceforamerica.org/.
Gartner recently estimated that, by 2011, more than 25 percent of government vertical, domain-specific applications will either be open source, contain open source application components, or be developed as community source.
The mission of Open Source for America is to serve as an advocate and encourage broader U.S. Federal government participation in free and open source software. Specifically, Open Source for America will help effect change in policies and practices to allow the Federal government to better use these technologies, help coordinate these communities to collaborate with the Federal government on technology requirements, and raise awareness and create understanding among Federal government leaders about the values and implications of open source software.
The diverse Board of Advisors of Open Source for America includes respected leaders such as Roger Burkhardt, Rishab Ghosh, Marv Langston, Chris Lundburg, Dawn Meyerriecks, Eben Moglen, Arthur L. Money, Tim O'Reilly, Stormy Peters, Simon Phipps, Mark Shuttleworth, Paul Smith, Dr. Doug Stone, Michael Tiemann, Andy Updegrove, Bill Vass, Tony Wasserman, and Jim Zemlin.
Founding members of Open Source for America include: Acquia, Alfresco Software, Advanced Micro Devices, Inc., Jono Bacon, Black Duck Software, Inc., Josh Berkus, Ean Schuessler, BrainFood, Canonical, CodeWeavers, CollabNet, Colosa, Inc., Continuent, Danese Cooper, Crucial Point LLC, Josh Davis, Debian, Democracy in Action, Electronic Frontier Foundation, EnterpriseDB, Bdale Garbee, GNOME Foundation, Google, JC Herz, ibiblio.org, Ingres Corporation, Jaspersoft, Mitch Kapor, Kapor Capital, KnowledgeTree, Marv Langston, The Linux Foundation, Linux Fund, Inc., Lucid Imagination, Geir Magnusson, Jr., Medsphere, Mehlman Vogel Castagnetti, Mercury Federal Systems, Monty Widenius, Monty Program AB, Mozilla, North Carolina State University Center for Open Software Engineering, Novell, Open Solutions Alliance, Open Source Initiative, Open Source Institute, Oracle, O'Reilly Publishing, Oregon State University Open Source Lab, Open Source Software Institute, Pentaho, RadiantBlue, Red Hat, Relative Computing Environments LLC., REvolution Computing, Walt Scacchi, Institute for Software Research at UC Irvine, Software Freedom Law Center, SpikeSource, SugarCRM, Sunlight Labs, Sun Microsystems, School of Engineering, University of California, Merced, University of Southern Mississippi, Andy Updegrove, Gesmer Updegrove LLP, Tony Wasserman, Center for Open Source Investigation, Carnegie Mellon Silicon Valley, Zenoss, Inc., Zimbra, and Zmanda.
For recent open source developments in government, go here: http://www.opensourceforamerica.org/case-studies/
Google Announces Plan for New Linux-based Chrome OS
In July, Google informed readers of its corporate blog that it was working on a new, lightweight operating system to support a new Web application platform for PCs and mid-range Internet devices. It will be based on the Linux kernel, but will have a new windowing sub-system. Since it is designed to work with the Chrome Web browser, Google refers to its planned OS as Chrome OS.
The blog post states: "Google Chrome OS is an open source, lightweight operating system that will initially be targeted at netbooks. Later this year, we will open-source its code, and netbooks running Google Chrome OS will be available for consumers in the second half of 2010."
The blog entry goes on to draw some distinctions with its on-going Android OS project for smart phones: "Google Chrome OS is a new project, separate from Android. Android was designed from the beginning to work across a variety of devices from phones to set-top boxes to netbooks. Google Chrome OS is being created for people who spend most of their time on the Web, and is being designed to power computers ranging from small netbooks to full-size desktop systems."
The goals for the new operating system are speed, simplicity, and security, as well as expanding the power of Web applications. A fast boot of only a few seconds is another design goal.
A few weeks earlier at the O'Reilly Velocity Conference, Google VP of Search Product and User Experience Marissa Mayer spoke at a keynote and announced Google initiatives to improve the performance of both Web pages and Web applications. These included the public release of new performance measuring tools such as PageSpeed and the Google "Speed" Web site, which will feature discussions and tech talks on performance issues. See http://code.google.com/speed/.
In her keynote, Mayer said that Google would continue to work on improving performance at the page design level, the browser level, and the server level. Chrome OS is clearly part of that effort as well.
In June, Google created a Web site focused on making Web applications, sites, and browsers faster. The developer site supports Google's decision to share its best practices with tutorials, tips, and performance tools. Google hopes to make the Web faster by aiding developers interested in Web application performance.
Google made the announcement in June at O'Reilly Media's Velocity conference in San Jose, California, an event focused on Web performance. Featured in several Google-sponsored presentations, the Web site offers new performance tools such as Page Speed, an augmented version of YSlow that analyzes the interaction between Web browsers and Web servers.
Back in December, Google announced its Native Client project, which aims to create a secure framework for running native code over the Web. The goal is to develop "...a technology that seeks to give Web developers the opportunity to make safer and more dynamic applications that can run on any OS and any browser." One aspect of this is to use reliable disassembly and a code validator to determine whether an executable includes unsafe x86 instructions. See: http://googleonlinesecurity.blogspot.com/2008/12/native-client-technology-for-running.html
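Conceptually, a validator of this kind disassembles a binary and rejects any instruction that is not on a safe whitelist. The toy Python sketch below illustrates the idea only; the instruction lists here are invented for the example, and the real Native Client validator works on reliably disassembled x86 machine code, not mnemonic strings:

```python
# Toy illustration of whitelist-based code validation, in the spirit of
# Native Client. The mnemonic sets below are invented for this example.

SAFE = {"mov", "add", "sub", "push", "pop", "jmp", "call", "ret", "nop"}
UNSAFE = {"int", "syscall", "sysenter", "in", "out", "hlt"}  # raw syscalls, port I/O, etc.

def validate(disassembly):
    """Accept a module only if every instruction is on the whitelist."""
    for mnemonic in disassembly:
        if mnemonic in UNSAFE or mnemonic not in SAFE:
            return False, mnemonic
    return True, None

ok, bad = validate(["push", "mov", "add", "ret"])
print("module 1:", "accepted" if ok else f"rejected ({bad})")

ok, bad = validate(["mov", "int", "ret"])   # raw interrupt -> rejected
print("module 2:", "accepted" if ok else f"rejected ({bad})")
```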
According to Amanda McPherson, VP of Marketing and Developer Programs at The Linux Foundation, "...Google's Native Application Project will be a key part of this OS," letting applications use "...more of the computing power of the device than through the normal app/browser paradigms of today." She adds that this approach can minimize the current advantages of native (Windows) applications and "...should grow the stable of applications for Chrome, and every other Web browser who makes use of this technology."
She sees the Google model as providing better performance and boot times on devices that are cheaper to make and don't carry the Microsoft OS tax. Her blog entry is available here: http://www.linux-foundation.org/weblogs/amanda/2009/07/09/what-is-googles-goal-with-its-chrome-os/
In his blog posting on the day of the announcement, Jim Zemlin, executive director of the Linux Foundation, wrote that this is a victory for Linux and its community development model: "We look forward to seeing Google collaborate closely with the Linux community and industry to enhance Linux as the foundation for this new computing model."
See his full posting at: http://www.linuxfoundation.org/news-media/blogs/browse/2009/07/linux-clear-winner-google-os-news/. In its Chrome OS FAQ, Google mentions that it is working with Acer, Adobe, ASUS, Freescale, Hewlett-Packard, Lenovo, Qualcomm, Texas Instruments, and Toshiba, among others.
Gartner, IDC Report PC Market Down Less than 5%
The PC market fell less than expected in the last quarter, according to research firms Gartner and IDC. Both firms released numbers in mid-July showing that the global PC market declined less than the expected 6.3 to 9.8 percent. Gartner said the market slipped 5 percent, while IDC said it fell only 3.1 percent.
In terms of market share, Hewlett-Packard grew its lead as the #1 PC vendor with almost 20% of the market, while Dell slipped further to about a 14% share. Acer was #3 with a 12.7% market share.
Sandia Labs boots one million Linux virtual machines
Computer scientists at Sandia National Laboratories in Livermore, California, have successfully run more than a million Linux kernels as virtual machines.
The achievement will allow cyber-security researchers to model behavior found in malicious botnets, or networks of infected machines on the scale of a million nodes.
Sandia scientists used virtual machine (VM) technology and the power of its Thunderbird supercomputing cluster for the demonstration.
Running a high volume of VMs on one supercomputer - at a similar scale as a botnet - allows cyber-researchers to watch how botnets work and explore ways to stop them.
Previously, researchers had been able to run only up to 20,000 kernels concurrently. (A "kernel" is the central component of most computer operating systems.) The more kernels that can be run at once, said Sandia computer scientist Ron Minnich, the more effective cyber-security professionals can be in combating the global botnet problem. "Eventually, we would like to be able to emulate the computer network of a small nation, or even one as large as the United States, in order to 'virtualize' and monitor a cyber-attack," he said.
A related use for tens of millions of operating systems, Sandia's researchers suggest, is to construct high-fidelity models of parts of the Internet.
"The sheer size of the Internet makes it very difficult to understand in even a limited way," said Sandia computer scientist Ron Minnich. "Many phenomena occurring on the Internet are poorly understood, because we lack the ability to model it adequately. By running actual operating system instances to represent nodes on the Internet, we will be able not just to simulate the functioning of the Internet at the network level, but to emulate Internet functionality."
The Sandia research, two years in the making, was funded by the Department of Energy's Office of Science, the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) program and by Sandia itself.
To complete the project, Sandia used its Albuquerque-based 4,480-node Dell high-performance computer cluster, known as Thunderbird. To arrive at the one million Linux kernel figure, Sandia's researchers ran 250 VMs on each of the 4,480 physical machines on Thunderbird. Dell and IBM both made key technical contributions to the experiments, as did a team at Sandia's Albuquerque site that maintains Thunderbird and prepared it for the project.
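The arithmetic behind the headline number is straightforward; a quick check in Python using the figures above:

```python
# How the one-million-kernel figure breaks down.
nodes = 4_480          # physical machines in the Thunderbird cluster
vms_per_node = 250     # Linux VMs run on each node

total_vms = nodes * vms_per_node
print(f"{total_vms:,} virtual machines")   # comfortably over a million
```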
The capability to run a high number of operating system instances inside of virtual machines on a high performance computing (HPC) cluster can also be used to model even larger HPC machines with millions to tens of millions of nodes that will be developed in the future, said Minnich. The successful Sandia demonstration, he asserts, means that development of operating systems, configuration and management tools, and even software for scientific computation can begin now before the hardware technology to build such machines is mature.
"It has been estimated that we will need 100 million CPUs (central processing units) by 2018 in order to build a computer that will run at the speeds we want," said Minnich. "This approach we've demonstrated is a good way to get us started on finding ways to program a machine with that many CPUs." Continued research, he said, will help computer scientists to come up with ways to manage and control such vast quantities, "so that when we have a computer with 100 million CPUs we can actually use it."
Ultra-large clusters can be used for modeling climate change, developing new medicines, and research into the efficient production of energy. "Development of this software will take years, and the scientific community cannot afford to wait to begin the process until the hardware is ready," said Minnich.
Bing Trims Yahoo's Search Share, not Google's
On the search front, in spite of its multi-million dollar advertising campaign and positive critical reviews, the new search engine "Bing" hardly moved the dial on search statistics. According to a report from comScore.com, in June Google held steady at 65% of searches. Microsoft sites rose from 8% to 8.4%, while Yahoo sites fell from 20.1% to 19.6%.
In a related conference call, Google told the press that it sees users writing longer and more sophisticated search queries. That may account for Google's stable statistics. Google is trying to respond with new options and more specialized search domains.
Americans conducted 14 billion searches in June, down slightly from May. Google Sites accounted for 9.1 billion searches, followed by Yahoo! Sites (2.8 billion), Microsoft Sites (1.2 billion), Ask Network (552 million) and AOL LLC (439 million). Facebook.com experienced the highest growth of the top ten expanded search properties with a 9% increase.
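Dividing the raw June search counts by the 14-billion total roughly reproduces comScore's published shares; small differences from the quoted percentages come from rounding in the published figures:

```python
# Deriving approximate market shares from the raw June 2009 search counts.
searches = {               # billions of searches (comScore)
    "Google": 9.1,
    "Yahoo": 2.8,
    "Microsoft": 1.2,
    "Ask": 0.552,
    "AOL": 0.439,
}
total = 14.0               # billions, all U.S. searches in June

for site, count in searches.items():
    print(f"{site:9s} {count / total:6.1%}")
```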
For more information, visit comScore.com and read the press release at http://comscore.com/index.php//Press_Events/Press_Releases/2009/7/comScore_Releases_June_2009_U.S._Search_Engine_Rankings/.
Conferences and Events
- JBoss World / Red Hat Summit
September 1 - 4, 2009, Chicago, IL
- IT Roadmap 2009
September 2, Sheraton Hotel, Dallas, TX
- Forrester's Security Forum 2009
September 10 - 11, Hyatt La Jolla, San Diego, CA
- Digital ID World 2009
September 14 - 16, Rio Hotel, Las Vegas, NV
- Ajax Experience 2009
September 14 - 16, Boston, MA
- SecureComm 2009
September 14 - 18, Athens, Greece
- SOURCE Barcelona 2009
September 21 - 22, Museu Nacional D'art de Catalunya, Barcelona, Spain.
- Intel Developer Forum 2009
September 22 - 24, Moscone Center, San Francisco, CA
- 1st Annual LinuxCon
September 21 - 23, Portland, OR
- 2nd Annual Linux Plumbers Conference
September 23 - 25, Portland, OR
- European Semantic Technology Conference
September 30 - October 2, Vienna, Austria
- Adobe MAX 2009
October 4 - 7, Los Angeles, CA
- Interop Mumbai
October 7 - 9, Bombay Exhibit Center, Mumbai, India
- Oracle OpenWorld 2009
October 11 - 15, San Francisco, CA
- Germany Scrum Gathering 2009
October 19 - 21, Hilton Munich City, Munich, Germany
- Web 2.0 Summit 2009
October 20 - 22, San Francisco, CA
- 1st Annual Japan Linux Symposium
October 21 - 23, Tokyo, Japan
- LISA '09 - 23rd Large Installation System Administration Conference
November 1 - 6, Marriott Waterfront Hotel, Baltimore, MD
- Cloud Computing & Virtualization 2009
November 2 - 4, Santa Clara Convention Center, Santa Clara, CA
- VoiceCon-SF 2009
November 2 - 5, San Francisco, CA
- 2nd Annual Linux Foundation End User Summit
November 9 - 10, Jersey City, NJ
- Interop New York
November 16 - 20, New York, NY
- Web 2.0 Expo New York
November 16 - 19, New York, NY
- QCon Conference 2009
November 18 - 20, Westin Hotel, San Francisco, CA
SimplyMEPIS 8.0.10 Released
The beginner-friendly distro and live CD SimplyMEPIS has just released a new version, based on Debian 5.0 "Lenny" and enhanced with a long-term-support kernel and the MEPIS Assistant applications, aiming to provide an always-updated, easy-to-use system.
The new version includes, along with the updates from "Lenny", updates to the MEPIS installer and MEPIS utilities, plus several package updates, including Firefox 3.5.2, Google Gadgets 0.11.0, and much more.
Fedora 12 Alpha GA Announced
The first public development release of Fedora 12 "Constantine" was announced last month. It includes several new features, including out-of-the-box support for many new webcam models, a better free video codec, PackageKit improvements, better power management, and many updated packages.
More information can be found here: https://fedoraproject.org/wiki/Fedora_12_Alpha_Announcement/ and download links can be found here: http://mirrors.fedoraproject.org/publiclist/Fedora/12-Alpha/.
Software and Product News
Canonical releases source code for Launchpad
In July, Canonical, founder of the Ubuntu project, announced it had open-sourced the code that runs Launchpad, the software development and collaboration platform used by tens of thousands of developers. Launchpad is used to build Ubuntu and other FOSS projects.
Launchpad allows developers to host and share code from many different sources using the Bazaar version control system, which is integrated into Launchpad. Translators can collaborate on translations across many different projects. End-users identify bugs affecting one or more projects so that developers can then triage and resolve those bugs. Contributors can write, propose, and manage software specifications. In addition, Launchpad enables people to support each other's efforts across different project hosting services, both through its Web interface and its APIs.
"Launchpad accelerates collaboration between open source projects," said Canonical founder and CEO Mark Shuttleworth. "Collaboration is the engine of innovation in free software development, and Launchpad supports one of the key strengths of free software compared with the traditional proprietary development process. Projects that are hosted on Launchpad are immediately connected to every other project hosted there in a way that makes it easy to collaborate on code, translations, bug fixes, and feature design across project boundaries. Rather than hosting individual projects, we host a massive and connected community that collaborates together across many projects. Making Launchpad itself open source gives users the ability to improve the service they use every day."
"Since the Drizzle project's start in April 2008, its community and contributors have used Launchpad as a platform for managing code and development tasks, and as an efficient method of communication between community members regarding bugs, workflow, code reviews, and more," said Jay Pipes, Core Developer on the Drizzle Project at Sun Microsystems. "Launchpad makes it easy to take all the disparate pieces of software development - bug reporting, source control, task management, and code reviews - and glue them together with an easy-to-use interface that emphasizes public and open community discourse."
Launchpad hosts open source projects for free, but closed source projects use the service for a fee. This means that projects can use the features that Launchpad provides but do not need to share code if that is not desirable. The privacy features are currently in beta, and will be added to the commercial service as they become available.
Technical details about the open-sourcing can be found at https://dev.launchpad.net/.
Penguin Computing Launches HPC in the Cloud
Penguin Computing announced the availability of "Penguin on Demand" - or POD - a new service that delivers high performance computing (HPC) in the cloud. POD is targeted at researchers, scientists, and engineers who require surge capacity for time-critical analyses.
"The most popular cloud infrastructures today, such as Amazon EC2, are not optimized for the high performance parallel computing often required in the research and simulation sciences," said Charles Wuischpard, CEO at Penguin Computing. "POD delivers immediate access to high-density HPC computing, a resource that is difficult or impossible for many users to utilize in a timely and cost-effective way."
POD provides a computing infrastructure of highly optimized Linux clusters with specialized hardware interconnects. Rather than using machine virtualization, as is typical in traditional cloud computing, POD allows users to access a server's full resources and I/O at one time for maximum performance and massive HPC workloads.
Based on high-density Xeon-based compute nodes coupled with high-speed storage, POD provides a persistent compute environment that runs on a head node and executes directly on compute nodes' physical cores. Both GigE and DDR high-performance Infiniband network fabrics are available. POD customers also get GPU supercomputing with Nvidia Tesla processor technology. Jobs typically run over a localized network topology to maximize inter-process communication, to maximize bandwidth, and to minimize latency.
Penguin Computing offers support and services for POD customers, including application set-up, creation of the HPC computing environment, ongoing maintenance, data exchange services, and application tuning. In addition, POD includes persistent storage for local data and user-defined compute environments.
For more information about Penguin on Demand (POD), please go to http://www.penguincomputing.com/POD/Penguin_On_Demand/.
PlateSpin Migrate Supports P-to-V Migrations for Solaris
In July, Novell announced the addition of physical-to-virtual migration support for Sun's Solaris 10 operating system in the latest version of PlateSpin Migrate, their workload management product for moving workloads from anywhere to anywhere: between physical, image, virtual, and cloud environments.
PlateSpin Migrate 8.1 offers workload migration support for Solaris Containers, giving customers the ability to migrate workloads from physical to virtual environments. The latest version significantly expands the already broad list of platforms supported for physical-to-virtual migration, adding support for the recently released SUSE Linux Enterprise 11 from Novell alongside the existing support for prior versions of SUSE Linux Enterprise. PlateSpin Migrate 8.1 also adds support for Windows Server 2008 and Windows Vista.
"We expect PlateSpin Migrate 8.1 to make it even easier for customers to take advantage of the power and versatility of Solaris Containers," said Jim McHugh, vice president of Data Center Software Marketing at Sun. "Using PlateSpin Migrate 8.1 to perform physical-to-virtual migration will also help minimize the risk of introducing errors into new configurations and speed the completion of virtualization projects."
PlateSpin Migrate 8.1 makes it easy to migrate workloads between physical servers, image archives, and virtual hosts. It also offers performance improvements for business-critical workload migrations, making increased use of block-based transfer technology that transfers only the portions of a file that have changed. This limits downtime during the migration process and improves migration performance, especially over slow and expensive WAN connections.
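The block-based transfer idea can be sketched in a few lines of Python: split the file into fixed-size blocks, hash each one, and re-send only the blocks whose hashes differ from the copy already on the target. This is a simplified illustration of the general technique, not PlateSpin's actual implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per block; real products tune this

def block_hashes(data):
    """Hash each fixed-size block of a byte string."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old, new):
    """Indices of blocks that differ and must be re-sent."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

# A 10-block file with one byte modified: only that block is transferred.
old = bytes(10 * BLOCK_SIZE)
new = bytearray(old)
new[5 * BLOCK_SIZE] = 1
print(changed_blocks(old, bytes(new)))
```

Over a slow WAN link, re-sending one 4 KB block instead of the whole file is what yields the downtime and bandwidth savings described above.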
PlateSpin Migrate 8.1 is available now. The Windows/Linux version is priced at $289 for a workload license. PlateSpin Migrate for UNIX is priced at $1,495 for a one-time license. For more information about this announcement, see http://www.platespin.com/products/migrate/.
Sun Releases NetBeans 6.7 with Project Kenai
At the end of June, the NetBeans developer community announced NetBeans Integrated Development Environment (IDE) 6.7. This new version of NetBeans features tight integration with Project Kenai, Sun's collaborative hosting site for free and open source projects. Developers can download the free, full-featured NetBeans IDE 6.7 at http://www.netbeans.org/.
The integration between NetBeans and Kenai allows developers to stay in their IDE and navigate between Kenai.com, local code, bug reports, IM chats, and project wikis. This integration allows developers to discuss, edit, debug and commit code through one easy-to-use interface. Other key features of NetBeans IDE 6.7 include:
- Support for Maven, a community software project management and comprehension tool;
- New "Team" menu provides access to projects on Kenai.com;
- Automated continuous integration system with Hudson, an extensible Java-based solution which makes it easier for developers to integrate changes to their project, and makes it easier for users to obtain a fresh build;
- Improved PHP support, allowing developers to connect with each other and the latest technologies;
- Support for Zembly, a single registry and repository of popular Web APIs. The Zembly Client Library NetBeans plug-in enables developers to discover popular APIs, and with a simple Drag and Drop functionality, create the necessary code to consume the APIs from Java and JavaFX applications.
Integration with Kenai.com will allow developers to stay in the IDE to create projects in the cloud; get sources from Kenai projects; and query, open and edit issues for them using Bugzilla. NetBeans IDE users can stay connected with other team members with an integrated chat, Kenai's user profiles, wikis, and mailing lists. Learn more about Project Kenai at http://www.kenai.com/.
Deividson Luiz Okopnik
Deividson was born in União da Vitória, PR, Brazil, on April 14, 1984. He became interested in computing when he was still a kid, and started to code when he was 12 years old. He is a graduate in Information Systems and is finishing a specialization in Networks and Web Development. He codes in several languages, including C/C++/C#, PHP, Visual Basic, Object Pascal, and others.
Deividson works at Porto União's Town Hall as a Computer Technician, specializing in Web and desktop system development and database/network maintenance.
Howard Dyckoff is a long-term IT professional with primary experience at Fortune 100 and 200 firms. Before his IT career, he worked for Aviation Week and Space Technology magazine, and before that edited SkyCom, a newsletter for astronomers and rocketeers. He hails from the Republic of Brooklyn [and Polytechnic Institute] and now, after several trips to Himalayan mountain tops, resides in the SF Bay Area with a large book collection and several pet rocks.
Howard maintains the Technology-Events blog at blogspot.com, from which he contributes the Events listing for Linux Gazette. Visit the blog to preview some of the next month's NewsBytes Events.