We are planning a software upgrade on February 23, 2016 to the news server software that manages messages and headers. This upgrade enables our news servers to display the full depth of headers going back 7+ years to match our message retention.
As you may know or have experienced, some very active newsgroups do not display headers as far back as we have messages. (We could previously display only a limited number of message headers because of a field-size limit in our message database.) Going forward, we can display headers as far back as we have messages.
The upgrade will cause all headers to be re-sequenced. In practice, this means that if you store headers locally on your PC, the headers retrieved up to now will no longer point to the expected messages.
This should NOT affect the ability to use an NZB to retrieve messages.
If you store headers locally on your PC, you should purge them. In most Usenet client software, you can purge headers by unsubscribing from a newsgroup and then resubscribing.
If you do not store headers, no action is needed.
If you continue to use the old locally stored headers in your client after February 23rd, you will likely receive errors like “message not found” or “message out of server’s retention”.
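As a rough illustration of why stale headers cause those errors, here is a minimal sketch (our own hypothetical helper, not part of any newsreader) that compares a client's cached article-number range against the range the server currently reports via the NNTP GROUP command:

```python
# Hypothetical staleness check: after a server re-sequences articles,
# the article numbers a client cached no longer fall inside the range
# the server reports for the group (the "first" and "last" values
# returned by the NNTP GROUP command).

def headers_stale(cached_first, cached_last, server_first, server_last):
    """Return True if the cached header range no longer fits inside
    the server's current article-number range."""
    return cached_first < server_first or cached_last > server_last

# Example: the client cached articles 1000-2000, but after
# re-sequencing the server reports 500000-900000.
print(headers_stale(1000, 2000, 500000, 900000))      # True - purge needed
print(headers_stale(500100, 500200, 500000, 900000))  # False - still valid
```

Purging and resubscribing simply forces the client to rebuild its cache from the server's new numbering.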
The Internet Archive today announced a massive milestone for its Wayback Machine: 400 billion indexed webpages. The data encompasses the Web as it looked anytime from late 1996 up until a few hours ago.
To celebrate the milestone, the Internet Archive has posted to USENET newsgroups a list of Wayback Machine highlights over the years:
2001 – The Wayback Machine launches.
2006 – Archive-It launches, allowing libraries that subscribe to the service to create curated collections of Web content.
March 25, 2009 – The Internet Archive and Sun Microsystems launch a new datacenter that stores the whole Web archive and serves the Wayback Machine. This 3 petabyte data center handled 500 requests per second from its home in a shipping container.
June 15, 2011 – The HTTP Archive becomes part of the Internet Archive, adding data about the performance of websites to the collection of website content.
May 28, 2012 – The Wayback Machine is available in China again, after being blocked for a few years without notice.
October 26, 2012 – the Internet Archive makes 80 terabytes of archived Web crawl data from 2011 available for researchers, to explore how others might be able to interact with or learn from this content.
October 2013 – New features for the Wayback Machine are launched, including the ability to see newly crawled content an hour after it’s archived, a “Save Page” feature so that anyone can archive a page on demand, and an effort to fix broken links on the Web starting with WordPress.com and Wikipedia.org.
Also in October 2013 – The Wayback Machine provides access to important Federal Government sites that go dark during the Federal Government Shutdown.
Onwards and upwards! Will the Wayback Machine have 500 billion webpages indexed by 2015? We wouldn’t be surprised if it happened sooner.
USENET still considerably pre-dates any of the Wayback Machine’s milestones, and many in the USENET community are proud to be part of the evolution that has occurred since those USENET roots.
A post on technology USENET newsgroups details an announcement from a Japanese team at Chuo University that has made a breakthrough in SSD technology, one that will make a great drive all the better. The team has found a software/firmware solution for a major drawback inherent in all SSDs.
This could enable high-end devices to easily reach transfer speeds of 1.5GB/s, where current models typically achieve around 500MB/s; lab tests also showed 60% less power use thanks to the reduction in additional drive writes.
The team has overcome the issue by changing the middleware that controls storage for database applications. The new method uses a “logical block address scrambler” which basically stops data being written to a new page and places it in a block to be erased in the next sweep. That means fewer pages, less copying and ultimately a better drive.
Current NAND flash drives can be adapted to work this way, meaning 55% fewer write and erase cycles and a longer device life. Since the changes are so small yet have such a large effect, we’d expect to see them appear very soon.
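As a toy illustration of the idea (our own simplified model, not the Chuo University team's actual algorithm), redirecting rewrites into fresh pages of blocks that are already slated for erasure means a block is erased only once all of its pages are invalidated, instead of on every in-place rewrite:

```python
# Toy model comparing two write strategies on NAND flash, where data
# is written in pages but can only be erased in whole blocks.

PAGES_PER_BLOCK = 4  # assumed small value for illustration

def naive_erases(num_rewrites):
    # Worst case: each in-place rewrite triggers a full block erase.
    return num_rewrites

def redirected_erases(num_rewrites):
    # Rewrites are appended to fresh pages; a block is erased only
    # after all of its pages have been invalidated.
    return num_rewrites // PAGES_PER_BLOCK

rewrites = 100
print(naive_erases(rewrites))       # 100 erases
print(redirected_erases(rewrites))  # 25 erases - far fewer cycles
```

Fewer erase cycles is exactly what extends the drive's life, since NAND cells wear out after a limited number of program/erase cycles.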
Data caps may again become a reality for US ThunderNews customers, courtesy of their ISP. A Comcast executive said he is confident the company will roll out usage-based data pricing nationwide once it completes a series of “robust” trials currently under way in several markets. Speaking at the MoffettNathanson Media & Communications Summit today in New York City, David Cohen, executive vice president of Comcast, said the company is moving slowly with its usage-based data trials to avoid alienating consumers. “We don’t want to blow up our high-speed data business,” he was quoted as saying in a report posted on USENET newsgroups.
Under such a model, Cohen said, Comcast customers would be allotted a specific amount of bandwidth that’s included in their monthly charge – say, 300GB – and they would pay in increments for anything after that.
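A minimal sketch of such a model, with assumed numbers (the 50GB increment size and $10-per-increment price are our illustration, not announced figures):

```python
import math

def monthly_overage_charge(usage_gb, included_gb=300,
                           increment_gb=50, price_per_increment=10.00):
    """Usage-based pricing sketch: usage up to the included allotment
    is covered by the monthly charge; anything beyond is billed in
    fixed increments, rounded up."""
    overage_gb = max(0, usage_gb - included_gb)
    return math.ceil(overage_gb / increment_gb) * price_per_increment

print(monthly_overage_charge(280))  # 0.0 - under the 300GB allotment
print(monthly_overage_charge(410))  # 30.0 - 110GB over = 3 increments
```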
The reason they haven’t done so already? They’re still working out exactly where they can cap things before they start getting phone calls — that is, before people start calling up to cancel. Meanwhile, making things more complicated tends to scare people away, so they don’t want to just offer up multiple plans/tiers — so before they make any changes, they need to find that plan that works for almost everyone.
The last time Comcast made a big change to its data plans was two years ago, when it dropped its controversial 250GB monthly cap in favor of 300GB plans. Those who went over the monthly allotment were originally threatened with a year-long service suspension, though Comcast has since started charging for extra chunks of data instead.
Symantec, maker of the widely used Norton Antivirus software suite, has declared that antivirus technology “is dead”.
The company’s senior vice president of information security, Brian Dye, has been quoted on USENET as saying that hackers were not only finding new ways to break into computers but that antivirus wasn’t “a moneymaker in any way.”
To put this in perspective, the report highlights the competitive climate Symantec currently finds itself in, despite having pioneered computer security several decades ago. Indeed, revenue has fallen in each of the past two quarters, culminating in the company firing its CEO, the second time it has done so in two years.
Not everyone agrees, however, and Kaspersky Lab CEO Eugene Kaspersky has hit back with a strongly worded statement that was posted to USENET newsgroups. According to Kaspersky, security is a combination of various technologies, including heuristics, sandboxing, and cloud protection, among others, as well as signature-based antivirus detection.
“I’ve heard antiviruses being declared dead and buried quite a few times over the years, but they’re still here with us–alive and kicking,” said Kaspersky. “I fully agree that single-layer signature-based virus scanning is nowhere near a sufficient degree of protection–not for individuals, not for organizations large or small; however, that’s been the case for many years.”
To be fair, Symantec began to move beyond malware long ago. Its Norton security suite has long included a password manager and code that detects malicious e-mails and Web links. Heuristic algorithms also attempt to detect malicious files even when they have never been seen before. But increasingly, Symantec is competing against its newer rivals by matching the suite of non-AV services they provide.
We at ThunderNews are not sold on the Symantec claim and advise all our customers to keep their anti-virus solutions installed and up to date.
There was a time, in computing’s not-so-distant past, where magnetic tape was the best way to back up large amounts of data. In the mid-90s, tape could store tens or hundreds of gigabytes, while hard drive capacities were still mostly measured in megabytes. That would soon change, of course, with the advent of writable optical media and cheap, large hard drives, but even today tape drives still hang around as one of the best options for mass data backup. Now, Sony has developed a new technology that pushes tape drives far beyond where they once were, leading to individual tapes with 185 terabytes of storage capacity.
Back in 2010, the standing record for how much data magnetic tape could store was 29.5GB per square inch. To compare, a standard dual-layer Blu-ray disc holds 25GB per layer, which is why big-budget, current-gen video games can clock in at around 40 or 50GB. That, however, is an entire disc, whereas magnetic tape could store more than half that capacity in a single square inch.

As recently announced on technology newsgroups, Sony has developed a new magnetic tape material that demolishes the previous 29.5GB record: it can hold a whopping 148GB per square inch, making it the new record holder for storage density in the medium. Spooled into a cartridge, each tape could hold a mind-boggling 185TB. Again, for comparison, that’s 3,700 dual-layer 50GB Blu-rays (a stack that would be 14.3 feet, or 4.4 meters, high). In fact, one of these tapes would hold five more terabytes than a $9,305 hard drive storage array.
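The Blu-ray comparison checks out with a little arithmetic (the roughly 1.2mm disc thickness is a standard figure we're assuming for the stack-height estimate):

```python
# Sanity-checking the article's numbers: 185TB expressed as a stack
# of dual-layer 50GB Blu-ray discs.

DISC_GB = 50    # dual-layer Blu-ray capacity
DISC_MM = 1.2   # standard Blu-ray disc thickness (assumed)

discs = 185_000 // DISC_GB        # 185TB = 185,000GB
stack_m = discs * DISC_MM / 1000  # stack height in meters

print(discs)              # 3700 discs
print(round(stack_m, 1))  # 4.4 meters high
```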
To create the new tape, Sony used sputter deposition, which builds layers of magnetic crystals by firing argon ions at a polymer film substrate. Combined with a soft magnetic under-layer, the resulting magnetic particles measured just 7.7 nanometers on average, allowing them to be packed closely together.
Perhaps surprisingly, storage tape shipments grew 13% two years ago and were headed for 26% growth last year. Sony also stated that it would like to commercialize the new material, and to continue developing its sputter deposition methods, but did not say if or when that will happen. While 185TB of storage on a single cartridge is extremely appealing for people with large digital collections (music, games, or really any kind of media), it’s best to remember that tape has never been an easy-access medium. Read and write times feel like (and often are) an eternity, and tape is used mainly for safe-keeping backups rather than because you have too much music on your SSD and want to free up space for a new game. Still, for massive, non-time-sensitive storage, tape libraries remain one of the most common methods used by big corporations and even USENET newsgroup users and providers.
On May 4, Sony will present the new material to an audience at the international magnetics conference, Intermag Europe 2014.
Last week, we learned about a potential security exploit called “Heartbleed” on some websites that use SSL to secure customer information.
The problem affects a piece of software called OpenSSL, used for security on popular web servers. With OpenSSL, websites can provide encrypted information to visitors, so the data transferred (including usernames, passwords and cookies) cannot be seen by others while it goes from your computer to the website.
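For the curious, you can see which OpenSSL build your own Python links against. Heartbleed affected OpenSSL 1.0.1 through 1.0.1f (it was fixed in 1.0.1g); the string check below is a rough heuristic for illustration only, not a real audit:

```python
import ssl

# The OpenSSL version string your Python interpreter was built against.
print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 3.0.13 30 Jan 2024"

def looks_vulnerable(version_string):
    """Rough check for the Heartbleed-affected 1.0.1 through 1.0.1f
    range (a heuristic for illustration, not a security audit)."""
    affected = [f"1.0.1{suffix}" for suffix in ["", "a", "b", "c", "d", "e", "f"]]
    return any(f"OpenSSL {v} " in version_string for v in affected)

print(looks_vulnerable("OpenSSL 1.0.1e 11 Feb 2013"))  # True
print(looks_vulnerable("OpenSSL 1.0.1g 7 Apr 2014"))   # False
```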
This is what’s important to know: Heartbleed is critical. It affected nearly two-thirds of the Internet and many large Internet companies have been working long hours to update their services to keep customers and visitors safe.
Since we’ve learned about the vulnerability, we’ve been updating ThunderNews services that use the affected OpenSSL version. This includes the servers we use as well as the USENET servers our customers use.
There are no additional steps our customers need to take in order to be safe from the Heartbleed exploit. We have taken steps to ensure that customer information is secure, and we will continue to monitor the situation as needed.
Good news for Android users: You can now access your PC desktop, including your USENET newsreader, directly from your smartphone or tablet.
Google announced the launch of Chrome Remote Desktop app for Android this week, which lets you access files sitting on your home PC or Mac even when you’re nowhere near it.
The move builds on the Chrome Remote Desktop app launched in 2011, which let users remotely access a desktop from another laptop or computer. The service is free, a stark contrast with costly remote-desktop software such as Parallels Access.
After downloading the Android app from Google Play, you’ll need to install the Chrome Remote Desktop extension in a desktop’s Chrome browser to connect the two systems. Then, grant access for the remote connection to work and set up a PIN code for the PC.
The PC’s name will then appear on the Chrome Remote Desktop page (and needs to be selected) before you enter the same PIN code in the app. Then you’re good to go.
Microsoft also has its own remote desktop app client, as does Amazon Workspaces, which lets employees access work computers from their personal devices.
Move over, eSATA. Corning’s new optical USB 3.0 cables are finally on sale, as discussed on popular newsgroups, and they can move data faster than eSATA could ever hope to. Almost twice as fast, as a matter of fact: eSATA peaks at about 3Gbps, while Corning’s USB3.Optical cables can achieve throughput of up to 5Gbps.
Better still, they’re capable of doing it over distances of 30 meters. That’s not quite as good as Corning’s Thunderbolt 2 version, which can handle runs of 100 meters, but USB 3.0 ports are a whole lot more common. The big downside here is that retail pricing for the USB3.Optical cables starts at about $109.99.
Price is one major reason optical cables haven’t taken off with consumers, but it certainly won’t deter professionals who work with massive files that are stored on external devices. Things like raw 1080p video and massive data sets can move at an absolutely blistering pace over Corning’s cables.
At 5Gbps, USB3.Optical cables max out the USB 3.0 spec. USB 3.1 was finalized last August, however, and it raises the speed limit to a whopping 10Gbps. Corning hasn’t commented on whether the current batch of cables will be able to keep up with USB 3.1 controllers, but they’ll at least be compatible — and it’s not like 5Gbps is slow or anything.
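To put those line rates in perspective, here is a rough transfer-time comparison (these are raw signaling rates, so real payload throughput is somewhat lower once encoding and protocol overhead are accounted for):

```python
def transfer_seconds(size_gb, line_rate_gbps):
    """Upper-bound transfer time: size in gigabytes times 8 bits per
    byte, divided by the raw line rate in gigabits per second."""
    return size_gb * 8 / line_rate_gbps

size = 25  # GB, e.g. one Blu-ray layer's worth of raw 1080p video
print(round(transfer_seconds(size, 3), 1))   # eSATA ~3Gbps
print(round(transfer_seconds(size, 5), 1))   # USB 3.0 at 5Gbps
print(round(transfer_seconds(size, 10), 1))  # USB 3.1 at 10Gbps
```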
Still, the additional 5Gbps would provide the kind of speed necessary for USENET users working with uncompressed 4K video stored on enterprise-grade RAID devices. It seems unlikely that Corning, which first showed off the 5Gbps cables more than a year ago, won’t be ready for the big debut of SuperSpeed USB 10Gbps.
Seagate Technology is now shipping what it claims is the world’s fastest 6TB hard disk drive, designed for hardcore users and data centers.
Version 4 of the Seagate Enterprise Capacity 3.5 HDD delivers supersize storage and enterprise reliability to meet the explosive growth of data being processed.
Seagate said it enables faster data transfers by building on an eighth-generation platform that enables the drive to deliver up to a 25% increase in performance over other 6TB drives.
By utilizing the latest generation of 12Gbps SAS, Seagate said the drive provides customers with the scalability for future-proofing their systems.
It is also available in an enterprise-ready SATA 6Gb/s interface for easy system integration.
The drive is a self-encrypting drive with instant secure erase for easy drive disposal or repurposing and FIPS SED security options.
Seagate’s VP of marketing Scott Horn said unstructured data is growing exponentially.
“This will cause cloud service providers to look for innovative ways to store more within an existing footprint while lowering operational costs,” Horn said.
The drive is currently shipping globally.