
Intel’s 14th-gen ‘Raptor Lake Refresh’ Processors Offer up to 6GHz Clock Speeds, Wi-Fi 7, AI-guided Overclocking, and More

  • Intel’s 14th-gen desktop processors offer up to 24 cores, 32 threads, and a peak clock speed of up to 6GHz.
  • The new Intel 14th-gen “Raptor Lake Refresh” processors offer enhanced gaming performance, AI-guided overclocking support, and advanced connectivity features like Wi-Fi 7 and Thunderbolt 4 and 5.
  • The processors are backward compatible with 600- and 700-series motherboards and will be available in retail and via OEM partners starting October 17, 2023, with prices starting at $294.

Intel has launched its latest 14th-generation desktop processors, with the portfolio headlined by the Core i9-14900K – the first mass-produced processor with up to 6GHz clock speed out of the box. Like the 13th-generation Raptor Lake, the new “Raptor Lake Refresh” is made on the Intel 7 (10nm) process node using the same die, with differences in core counts, higher clock speeds, advanced connectivity options and an AI Assist feature, among others.

There are six new desktop processors in the family: the Core i9-14900K/KF, the Core i7-14700K/KF, and the Core i5-14600K/KF. The K models come with an integrated GPU, while the KF models lack one and are slightly more affordable. Available starting October 17, these processors maintain the same pricing as last year’s models.

Talking about Intel’s 14th-gen desktop processors, Counterpoint Research Senior Analyst William Li, who tracks the PC market, said, “Intel’s new desktop CPU solutions could boost not only hardware capabilities to deal with incremental computing power requirements but also improve user experience with Intel Application Optimization (APO) technology. Although this time the Core 14th-gen desktop processors do not have huge upgrades as on laptop platforms, gamers and creators can still enjoy better computing performance without compromising user workflow with Intel’s solid achievement on overclocking. We believe that the global PC market has bottomed out in H2 2023 and will likely see a significant rebound in the next year largely due to the Windows 11 replacement cycle and Artificial Intelligence (AI) PC momentum.”

Intel 14th-gen ‘Raptor Lake Refresh’ Processors: Specifications

Aimed at gamers and creators, the Intel Core i9-14900K/KF offers clock speeds of up to 6GHz. The processor has a 24-core CPU, comprising eight Performance (P) cores with a max Turbo frequency of up to 5.6GHz and 16 Efficiency (E) cores with a max Turbo frequency of up to 4.4GHz, for a total of 32 processor threads. It also includes a 36MB L3 cache and a 32MB L2 cache.

Meanwhile, the Intel Core i7-14700K/KF has a 20-core CPU, comprising eight P-cores with a max Turbo frequency of up to 5.5GHz and 12 E-cores (up from eight in the previous generation) with a max Turbo clock speed of up to 4.3GHz, for a total of 28 threads. It also comes with a 33MB L3 cache and a 28MB L2 cache.

[Image: Intel 14th-gen Raptor Lake Refresh specifications. Source: Intel]

Lastly, the Intel Core i5-14600K has a 14-core CPU, comprising six P-cores with a max Turbo frequency of up to 5.3GHz and eight E-cores with a max Turbo clock speed of up to 4.0GHz, for a total of 20 threads. It also comes with a 24MB L3 cache and a 20MB L2 cache.

All six processors in the Intel 14th-gen family support memory speeds of up to 5,600MT/s (DDR5) and up to 3,200MT/s (DDR4). The K models also include Intel UHD Graphics 770, with a dynamic frequency between 1,550MHz and 1,650MHz, depending on the model.

In terms of backward compatibility, all the new processors will work with Intel’s 600- and 700-series motherboards that use LGA 1700 sockets, thus enabling easier upgrades for users.

AI Overclocking, Connectivity Enhancements, and more

One of the standout features of the latest 14th-gen processors is the new AI Assist feature in the Intel Extreme Tuning Utility (XTU). It provides users with one-click AI-guided overclocking on select unlocked desktop processors. As users receive step-by-step instructions, even someone without experience can easily overclock the processors. It signifies Intel’s commitment to delivering top-tier performance for desktop enthusiasts.

There are also new gaming-focused features like Intel Application Optimization (APO) that enhance application threading to optimize speed and frame rate for smooth and consistent gaming performance. It is enabled by default, but users can also disable it, or enable it only for certain games. Intel also mentioned that the APO is only for games and does not work with benchmarks. Then there is also the Intel Thread Director which optimizes application thread scheduling.

[Image: Intel 14th-gen Raptor Lake Refresh connectivity. Source: Intel]

But that’s not all. With hybrid work now the norm, much of the workday happens over video calls on different platforms. The AI Boost feature keeps users in focus on video calls while blurring distractions in the background. The AI can also reduce ambient background noise, isolating the user’s voice to deliver crisp, clear sound on calls.

Counterpoint Research Senior Analyst Akshara Bassi, who tracks the HPC, Cloud, and Server markets, said, “Intel has introduced AI features for its Extreme Tuning Utility (XTU) that help in AI-assisted overclocking for the 14th-gen processors (limited SKUs), and through the Application Optimization Program it has introduced automatic thread performance optimization while gaming.”

Connectivity is another area where the new desktop processors shine, with integrated support for Wi-Fi 6/6E and Bluetooth 5.3, as well as discrete support for Wi-Fi 7 and Bluetooth 5.4. The new processors also include support for Thunderbolt 4 and the upcoming Thunderbolt 5 wired connectivity, offering up to 80 Gbps of bi-directional bandwidth.

“From a technical perspective, the Intel Core i9-14900K is the first chip in volume to hit 6GHz, with support for Wi-Fi 7 and Bluetooth 5.4,” Bassi added.

In terms of pricing, the premium Core i9-14900K will be available for $564, the Core i7-14700K for $384, and the Core i5-14600K will start at $294.

Related Posts

Guest Post: OpenAI Haymaker?

OpenAI makes hay while the sun shines.

When a company that has issues with making profits can raise money at a valuation of $85 billion, it becomes abundantly clear that investors in generative AI have taken leave of their senses. 

OpenAI is reportedly raising money at a valuation of $80 billion to $90 billion. This looks like an opportunistic event for two reasons.

First, there are doubts over whether OpenAI actually needs the money. It was only nine months ago that Microsoft invested $10 billion in OpenAI, meaning that if it has run out of money already, it has a cash burn of $1.1 billion per month. This is Reality Labs’ level of cash burn, which, with 400 employees, amounts to $2.75 million per employee per month.

The vast majority of this spend would be going to compute costs, but even with 100 million users making 30 requests per day, this is an uneconomic level of spending. It would mean that ChatGPT and generative AI generally can never become a viable business or generate a positive ROI, and so one suspects that OpenAI in fact has plenty of money left.
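The back-of-the-envelope figures above can be checked directly. The user and request counts come from the paragraph; the 30-day month is an added simplifying assumption, and the article's $2.75 million figure rounds the monthly burn to $1.1 billion before dividing:

```python
# Sanity-check of the cash-burn arithmetic in the paragraphs above.
investment = 10e9        # Microsoft's reported investment, USD
months = 9               # time elapsed since the investment
employees = 400          # headcount figure used in the article

monthly_burn = investment / months            # ~1.11e9 USD per month
burn_per_employee = monthly_burn / employees  # ~2.78e6 USD per employee per month
                                              # (article rounds burn to 1.1e9 first -> 2.75e6)

# Implied cost per request if the entire burn were compute:
users = 100e6
requests_per_day = 30
requests_per_month = users * requests_per_day * 30   # assume a 30-day month
cost_per_request = monthly_burn / requests_per_month # ~1.2 US cents per request
```

At roughly a cent per request, the implied compute bill only looks plausible at extreme scale, which is the author's point that the burn figure is likely overstated.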

Second, virtually free money. In the market’s mind, OpenAI is the leading generative AI company in the world (which is debatable). Furthermore, generative AI is the hottest theme in the technology sector by a wide margin, meaning that OpenAI sits at the pinnacle of what the market wants to own. This in turn means that OpenAI can sell far fewer shares for the money it wants to raise, and its existing shareholders can also register large unrealized gains on their balance sheets. Consequently, I think that this raise is opportunistic in that the market has given OpenAI an opportunity to capitalize on its fame and popularity. 

However, most telling of all is that employees will also have an opportunity to sell some of their shares as part of this transaction. Insider stock sales are often an indicator of the insiders’ view that the valuation of the shares has hit a peak. At $85 billion, this is pretty hard to argue against. 

OpenAI is expected to earn revenues of $250 million this year and $1 billion next year, putting the shares at over 80x 2024 revenues. This is very high even in the best of times, but the plethora of start-ups and the thousands of models being made available for free by the open-source community lead one to think that competition is on the way.

Hence, price erosion is likely which in turn could lead to OpenAI missing the $1-billion revenue estimate for 2024 and burning through even more cash than expected. OpenAI will not be alone, and many start-ups will suffer from price erosion that will cause their targets to be missed. This could well be the pin that pricks the current bubble, causing enthusiasm to wane and valuations to fall. 

OpenAI may not be worth $85 billion but the timing of the raise is perfect.

(This guest post was written by Richard Windsor, our Research Director at Large.  This first appeared on Radio Free Mobile. All views expressed are Richard’s own.) 

Related Posts

Podcast #70: Qualcomm Driving On-device Generative AI to Power Intelligent Experiences at the Edge

Generative AI like ChatGPT and Google’s Bard have disrupted the industry. However, they are still limited to browser windows and smartphone apps, where the processing is done through cloud computing. That is about to change soon as Qualcomm Snapdragon-powered devices will soon be able to run on-device generative AI.

At MWC 2023, Qualcomm showcased Stable Diffusion on a Snapdragon 8 Gen 2-powered Android smartphone. The demo showed how a smartphone can generate a new image with text commands or even change the background, without connecting to the internet. Running generative AI apps directly on a device offers several advantages, including lower operational costs, better privacy, security, and reliability of working without internet connectivity.

ALSO LISTEN: Podcast #69: ChatGPT and Generative AI: Differences, Ecosystem, Challenges, Opportunities

In the latest episode of ‘The Counterpoint Podcast’, host Peter Richardson is joined by Qualcomm’s Senior Vice President of Product Management Ziad Asghar to talk about on-device generative AI. The discussion covers a range of topics from day-to-day use cases to scaling issues for computing resources and working with partners and the community to unlock new generative AI experiences across the Snapdragon product line.


You can read the transcript here.

Podcast Chapter Markers

01:35: Ziad starts by defining generative AI and comparing it with machine learning and other types of AI.

03:56: Ziad talks about AI experiences that are already present in Snapdragon-powered devices.

06:24: Ziad addresses the scaling issue for computing resources used to train large language models.

09:46: Ziad deep dives into the types of day-to-day applications for generative AI on devices like a smartphone.

13:34: Ziad talks about the hybrid AI model, involving both cloud interaction and edge.

15:43: Ziad on how Qualcomm is leveraging its silicon chip capabilities to unlock generative AI experiences.

19:20: Ziad on how Qualcomm is working with its ecosystem and the developer community.

21:57: Ziad touches on the privacy and security aspect with respect to on-device generative AI.


Podcast #69: ChatGPT and Generative AI: Differences, Ecosystem, Challenges, Opportunities

Generative AI has been a hot topic, especially after the launch of ChatGPT by OpenAI. It has even surpassed the metaverse in popularity. From top tech firms like Google, Microsoft and Adobe to chipmakers like Qualcomm, Intel and NVIDIA, all are integrating generative AI models into their products and services. So, why is generative AI attracting interest from all these companies?

While generative AI and ChatGPT are both used for generating content, what are the key differences between them? The content generated can include solutions to problems, essays, email or resume templates, or a short summary of a big report to name a few. But it also poses certain challenges like training complexity, bias, deep fakes, intellectual property rights, and so on.

In the latest episode of ‘The Counterpoint Podcast’, host Maurice Klaehne is joined by Counterpoint Associate Director Mohit Agrawal and Senior Analyst Akshara Bassi to talk about generative AI. The discussion covers topics including the ecosystem, companies that are active in the generative AI space, challenges, infrastructure, and hardware. It also focuses on emerging opportunities and how the ecosystem could evolve going forward.


Click here to read the podcast transcript.

Podcast Chapter Markers

01:37 – Akshara on what is generative AI.

03:26 – Mohit on differences between ChatGPT and generative AI.

04:56 – Mohit talks about the issue of bias and companies working on generative AI right now.

07:43 – Akshara on the generative AI ecosystem.

11:36 – Akshara on what Chinese companies are doing in the AI space.

13:41 – Mohit on the challenges associated with generative AI.

17:32 – Akshara on the AI infrastructure and hardware being used.

22:07 – Mohit on chipset players and what they are actively doing in the AI space.

24:31 – Akshara on how the ecosystem could evolve going forward.


Guest post: AI Business Model on Shaky Ground

OpenAI, Midjourney and Microsoft have set the bar for chargeable generative AI services with ChatGPT (GPT-4) and Midjourney costing $20 per month and Microsoft charging $30 per month for Copilot. The $20-per-month benchmark set by these early movers is also being used by generative AI start-ups to raise money at ludicrous valuations from investors hit by the current AI FOMO craze. But I suspect the reality is that it will end up being more like $20 a year.

To be fair, if one can charge $20 per month, have 6 million or more users, and run inference on NVIDIA’s latest hardware, then a lot of money can be made. If one then moves inference from the cloud to the end device, even more is possible as the cost of compute for inference will be transferred to the user. Furthermore, this is a better solution for data security and privacy as the user’s data in the form of requests and prompt priming will remain on the device and not transferred to the public cloud. This is why it can be concluded that for services that run at scale and for the enterprise, almost all generative AI inference will be run on the user’s hardware, be it a smartphone, PC or a private cloud.
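The economics sketched above are easy to make concrete. The subscription price and user count are from the paragraph; the per-user cloud-inference cost is a purely hypothetical placeholder to show why shifting inference to the end device matters:

```python
# Illustrative subscription economics for a generative AI service.
price_per_month = 20.0   # USD, the benchmark price from the article
users = 6e6              # the article's "6 million or more users"

annual_revenue = price_per_month * 12 * users   # 1.44e9 USD per year

# Hypothetical per-user cloud inference cost (assumption, not from the
# article). Moving inference on-device pushes this cost to ~0 for the vendor.
cloud_cost_per_user = 5.0
gross_margin = 1 - cloud_cost_per_user / price_per_month  # 0.75 in the cloud case
```

If price erosion drags the benchmark from $20 per month toward $20 per year, the same user base yields $120 million instead of $1.44 billion, which is the collapse the author is warning about.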

Consequently, assuming that there is no price erosion and endless demand, the business cases being touted to raise money certainly hold water. While the demand is likely to be very strong, I am more concerned with price erosion. This is because outside of money to rent compute, there are not many barriers to entry and Meta Platforms has already removed the only real obstacle to everyone piling in.

The starting point for a generative AI service is a foundation model, which is then tweaked and trained by humans to create the desired service. However, foundation models are difficult and expensive to design and cost a lot of money to train in terms of compute power. Until March this year, there were no trained foundation models widely available, but that changed when Meta Platforms’ family of LLaMA models “leaked” online. It has since become the gold standard for any hobbyist, tinkerer or start-up looking for a cheap way to get going.

Foundation models are difficult to switch out, which means that Meta Platforms now controls an AI standard in its own right, similar to the way OpenAI controls ChatGPT. However, the fact that it is freely available online has meant that any number of AI services for generating text or images are now freely available without any of the constraints or costs being applied to the larger models.

Furthermore, some of the other better-known start-ups such as Anthropic are making their best services available online for free. Claude 2 is arguably better than OpenAI’s paid ChatGPT service and so it is not impossible that many people notice and start to switch.

Another problem with generative AI services is that outside of foundation models, there are almost no switching costs to move from one service to another. The net result of this is that freely available models from the open-source community combined with start-ups, which need to get volume for their newly launched services, are going to start eroding the price of the services. This is likely to be followed by a race to the bottom, meaning that the real price ends up being more like $20 per year rather than $20 per month. It is at this point that the FOMO is likely to come unstuck as start-ups and generative AI companies will start missing their targets, leading to down rounds, falling valuations, and so on.

There are plenty of real-world use cases for generative AI, meaning that it is not the fundamentals that are likely to crack but merely the hype and excitement that surrounds them. This is precisely what has happened to the Metaverse where very little has changed in terms of developments or progress over the last 12 months, but now no one seems to care about it.

(This guest post was written by Richard Windsor, our Research Director at Large.  This first appeared on Radio Free Mobile. All views expressed are Richard’s own.) 

Related Posts

MediaTek to Focus on Automotive, Edge AI for Growth

  • The company saw a slight growth in Q2 revenues due to the improving demand for 5G SoCs.
  • Inventory came down to a relatively normal level.
  • MediaTek and NVIDIA have tied up to develop a full-scale product roadmap for the automotive industry.
  • Significant revenues are expected to be seen for MediaTek’s auto and custom ASIC segments from 2026.

 


MediaTek’s revenues were slightly up sequentially but down 43% annually in Q2 2023. Inventory has gradually come down to a relatively normal level, but the demand for smartphones will remain slow due to the global macroeconomic situation and the refurbished smartphone market. Against this backdrop, MediaTek is diversifying its portfolio by focusing on the auto, smart edge and custom ASIC segments. The company is estimated to take over two years to get material revenues from these segments.

AI and ASIC Opportunity

CEO: “As for ASIC, we recently see growing enterprise ASIC business opportunities in AI and datacenter markets. With our strong IP and SoC integration capabilities, we aim to continue to grow this business in the future.”

Parv Sharma’s analyst take: “With the growth in generative AI, the demand for edge AI processing has accelerated. Being one of the top players in edge devices, MediaTek is well-positioned to benefit from this shift. The company will focus on winning enterprise ASIC projects but catching up with major players like Broadcom and Marvell will take time, as customers typically work with existing suppliers for repeat projects.”

Growing focus on auto and partnership with NVIDIA

CEO: “We’re very excited about the recently announced partnership between MediaTek and NVIDIA to develop a full-scale product roadmap for the automotive industry. We believe our industry-leading low-power processors and 5G, WiFi connectivity solutions, combined with NVIDIA’s strong capability in software and AI cloud, will help us become highly competitive in the future connected software-defined vehicles market and shorten our time to market to accelerate our growth.”

Shivani Parashar’s analyst take: “MediaTek launched Dimensity Auto to focus on cockpit and connectivity solutions. With its partnership with NVIDIA, the company aims to develop a full-scale product roadmap for the automotive industry. Auto design cycles are long so it will take some time (2026-2027) for the company to increase revenues from this segment. Overall, we can say the auto segment will become a long-term revenue growth driver for MediaTek.”

Customer and channel inventories come down

CEO: “We observed that customer and channel inventories across major applications have gradually reduced to a relatively normal level. Recent demand from our customers has shown certain level of stabilization. However, our customers are still managing their inventory cautiously as global consumer electronics end market demand remains soft. For the near-term, we expect our business to gradually improve in the second half of the year.”

Shivani Parashar’s analyst take: “According to our supply chain checks, inventory levels are coming down and will get back to normal in the second half of 2023. OEMs will start restocking but will be cautious due to weak consumer demand and global macroeconomic conditions.”

[Image: MediaTek revenues]

Result summary

  • Slight improvement in revenues: MediaTek recorded $3.2 billion in revenues in Q2 2023, a slight increase of 2% QoQ but a decrease of 43% YoY due to the weak global demand for end products and the second-hand smartphone market. Customer and channel inventories across major applications have come down to a relatively normal level.
  • Maintained mobile segment revenue due to 5G SoCs: The mobile phone segment contributed 46% to the company’s revenue in Q2 2023, which declined by 51% YoY and increased by 2% QoQ. The demand for 5G SoCs improved during the quarter. The new flagship Dimensity SoC will be launched in the coming month.
  • New opportunities for smart edge: The smart edge segment contributed 47% to the company’s revenue in Q2, growing 2% sequentially. The demand for connectivity remained stable in the quarter. Business opportunities are growing for the ASIC segment.
  • Price discipline: MediaTek will focus on maintaining gross margin, following price discipline at a time of uncertainty in the global semiconductor industry.
  • Favorable guidance: MediaTek guided Q3 revenues in the range of $3.3 billion to $3.5 billion, growing 4%-11% sequentially. Gross margin is expected to be around 47% while the operating expense ratio is expected to be around 32% in Q3 2023. The smartphone, connectivity and PMIC segments will see revenue growth. The smart TV segment will witness declining revenues in the third quarter due to excess inventory.
  • Auto segment is picking up: Automotive will contribute $200 to $300 million to MediaTek’s revenue in 2023. More significant revenue can be seen from 2026. The current auto design pipeline revenue for MediaTek is over $1 billion.

Related Posts

TSMC Bullish on AI in Long Term

Weaker-than-expected macroeconomic conditions continued to weigh on TSMC’s Q2 2023 business performance. Muted smartphone and PC/NB demand negatively impacted the overall utilization rate during the quarter. Though largely expected by the market, the company further cut its full-year revenue guidance on weaker end demand expected for H2 2023. However, TSMC projects strong AI demand in Q3 2023 and, going forward, sees itself as the key enabler for AI GPUs and ASICs that require a large die size. We give our takes on the key points discussed during the earnings call:

Is AI semiconductor demand real?

  • Chairman (Mark Liu): Neither can we predict the near future, meaning next year, how the sudden demand will continue or will flatten out. However, our model is based on the data center structure. We assume a certain percentage of the data center processors are AI processors and based on that, we calculate the AI processor demand. And this model is yet to be fitted to the practical data later on. But in general, I think the trend of a big portion of data center processors will be AI processors is a sure thing. And will it cannibalize the data center processors? In the short term, when the capex of the cloud service providers is fixed, yes, it will. It is. But as for the long term, when their data service – when the cloud services have the generative AI service revenue, I think they will increase the capex. That should be consistent with the long-term AI processor demand. And I mean the capex will increase because of the generative AI services.
  • Adam Chang’s analyst take: Supply chain checks reveal that cloud service providers such as Microsoft, Google, and Amazon aggressively invest in AI servers. NVIDIA is continuing to add orders for the A100 and H100 to the supply chain, echoing the strong momentum for AI demand. TSMC holds a significant market share in AI semiconductor wafer production, mitigating the risk of misjudging CoWoS capacity expansion concerning AI demand.
  • Akshara Bassi’s analyst take: Over the medium term, as hyperscalers continue to develop their own proprietary AI models and look to monetize through AI-as-a-Service and similar models, the infrastructure demand should remain robust.
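The demand model Liu describes is essentially a share-of-processors calculation. A minimal sketch, with made-up inputs (none of these numbers come from TSMC):

```python
def ai_processor_demand(total_dc_processors: float, ai_share: float) -> float:
    """Liu's model as stated: assume a certain percentage of data-center
    processors are AI processors and derive AI processor demand from it."""
    return total_dc_processors * ai_share

# Hypothetical illustration only: 30M data-center processors, 10% AI share.
demand = ai_processor_demand(total_dc_processors=30e6, ai_share=0.10)
```

As Liu notes, the model's output is only as good as the assumed share, which is why he says it "is yet to be fitted to the practical data later on."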

Can AI semiconductor demand offset short-term macro weakness?

  • CEO (Che-Chia Wei): Three months ago, we were probably more optimistic, but now it’s not. Also, for example, China economy’s recovery is actually also weaker than we thought. And so, the end market demand actually did not grow as we expected. So put all together, even if we have a very good AI processor demand, it’s still not enough to offset all those kinds of macro impacts. So, now we expect the whole year will be -10% YoY.
  • Adam Chang’s analyst take: Although there is a lot of promise around AI, it would only account for around 6% of total revenues in 2023. Therefore, AI is not a panacea for broader short-term demand weakness.

Is TSMC CoWoS capacity enough to fulfill current AI demand?

  • CEO (Che-Chia Wei): For AI, right now, we see very strong demand, yes. For the front-end part, we don’t have any problem to support, but for the back end, the advanced packaging side, especially for the CoWoS, we do have some very tight capacity to — very hard to fulfill 100% of what customers needed. So, we are working with customers for the short term to help them to fulfill the demand, but we are increasing our capacity as quickly as possible. And we expect these tightening will be released next year, probably toward the end of next year. Roughly probably 2x of the capacity will be added.
  • Adam Chang’s analyst take: Due to TSMC’s CoWoS capacity constraints, the company is finding it challenging to fulfill the strong AI demand from customers, including NVIDIA, Broadcom, and Xilinx, at the moment. NVIDIA is actively seeking second-source suppliers as TSMC looks to outsource some of its production.

N3E/N3/N2 status

  • CEO (Che-Chia Wei): N3 is already involved in production with good yield. We are seeing robust demand for N3 and we expect a strong ramp in the second half of this year, supported by both HPC and smartphone applications. N3 is expected to continue to contribute mid-single-digit percentage of our total wafer revenue in 2023. Our N2 technology development is progressing well and is on track for volume production in 2025. Our N2 will adopt a nanosheet transistor structure to provide our customers with the best performance, cost, and technology maturity.
  • Adam Chang’s analyst take: Apple is the sole customer expected to adopt TSMC’s 3nm technology in its A17 Bionic and M3 chips during 2023. The Qualcomm Snapdragon 8 Gen 4 processor is also anticipated to join the TSMC 3nm family (N3E) in 2024. Moreover, Intel is likely to adopt TSMC’s 3nm technology for its Arrow Lake CPU, scheduled to launch in H2 2024. 

Results summary

  • Q2 2023 results beat slightly: TSMC reported $15.67 billion in sales, slightly above the midpoint of guidance. EPS beat consensus due to higher non-operating income. Both GPM and OPM slightly beat guidance thanks to favorable FX and cost control efforts.
  • Q3 2023 guidance in line: The management guided $16.7 billion-$17.5 billion in revenue (+9% QoQ at the midpoint), gross margin in the range of 51.5%-53.5%, and operating margin in the range of 38%-40%. The gross margin dilution resulting from the N3 ramp-up would be 2-3 percentage points in Q3 2023 and 3-4 percentage points in Q4 2023. This impact would persist throughout the entire year of 2024, affecting the overall gross margin by 3-4 percentage points. Notably, this dilution is higher than the 2-3 percentage points gross margin dilution experienced during the N5’s second year of mass production in 2021.
  • 2023 revenue guidance revised down but expected: TSMC revised down the full-year revenue guidance to -10% YoY. The management sees weaker-than-expected macroeconomics in H2 2023 affecting the demand for all applications except for AI.
  • Strong AI demand, 50% revenue CAGR forecast: AI revenue currently makes up 6% of TSMC’s total revenue. The company anticipates a remarkable compound annual growth rate (CAGR) of nearly 50% from 2022 to 2027 in the AI sector. As a result of this significant growth, the AI revenue percentage share in TSMC’s total revenue is projected to reach the low teens by 2027.
  • CoWoS capacity expected to double by 2024 end: TSMC is experiencing strong demand in the AI sector, with sufficient capacity for the front-end part but facing challenges in advanced packaging, particularly CoWoS. It is working with customers to meet demand in the short term while rapidly increasing capacity, which it expects to double by the end of 2024, easing the current tightness.
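The share projection above can be sanity-checked: growing the 6% AI revenue share at roughly 50% a year while total revenue grows at an assumed corporate CAGR (the 20% below is an illustrative assumption, not TSMC guidance) lands the 2027 share in the low-to-mid teens:

```python
# Sanity check of the AI-revenue-share projection.
ai_share_2023 = 0.06    # AI's share of TSMC revenue today, per the summary
ai_cagr = 0.50          # TSMC's stated ~50% AI revenue CAGR
total_cagr = 0.20       # ASSUMED total-revenue CAGR, for illustration only
years = 4               # 2023 -> 2027

# Share evolves by the ratio of the two growth rates, compounded.
ai_share_2027 = ai_share_2023 * ((1 + ai_cagr) / (1 + total_cagr)) ** years
# ~0.146, i.e. roughly the low-to-mid teens, consistent with TSMC's claim
```

A faster assumed total-revenue CAGR pushes the 2027 share toward the bottom of the teens; a slower one pushes it higher, so the "low teens" figure is sensitive to the overall growth assumption.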

Related Posts

5G Advanced and Wireless AI Set To Transform Cellular Networks, Unlocking True Potential

The recent surge in interest in generative AI highlights the critical role that AI will play in future wireless systems. With the transition to 5G, wireless systems have become increasingly complex and more challenging to manage, forcing the wireless industry to think beyond traditional rules-based design methods.

5G Advanced will expand the role of wireless AI across 5G networks, introducing new, innovative AI applications that will enhance the design and operation of networks and devices over the next three to five years. Indeed, wireless AI is set to become a key pillar of 5G Advanced and will play a critical role in the end-to-end (E2E) design and optimization of wireless systems. In the case of 6G, wireless AI will become native and all-pervasive, operating autonomously between devices and networks and across all protocols and network layers.

E2E Systems Optimization

AI has already been used in smartphones and other devices for several years and is now increasingly being used in the network. However, AI is currently implemented independently, i.e. either on the device or in the network. As a result, E2E systems performance optimization across devices and network has not been fully realized yet. One of the reasons for this is that on-device AI training has not been possible until recently.

On-device AI will play a key role in improving the E2E optimization of 5G networks, bringing important benefits for operators and users, as well as overcoming key challenges. Firstly, on-device AI enables processing to be distributed over millions of devices thus harnessing the aggregated computational power of all these devices. Secondly, it enables AI model learning to be customized to a particular user’s personalized data. Finally, this personalized data stays local on the device and is not shared with the cloud. This improves reliability and alleviates data sovereignty concerns. On-device AI will not be limited to just smartphones but will be implemented across all kinds of devices from consumer devices to sensors and a plethora of industrial equipment.
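The distributed, privacy-preserving training described above is, in essence, federated learning: devices train locally on their own data and share only model parameters, never the raw data. A toy sketch of federated averaging for a one-parameter model, with entirely synthetic per-device data (an illustration of the concept, not a 3GPP-specified algorithm):

```python
def local_update(weight, data, lr=0.1, steps=10):
    """One device: gradient steps on its own data for y = w * x (MSE loss)."""
    for _ in range(steps):
        grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
        weight -= lr * grad
    return weight

def federated_average(global_w, device_datasets):
    """Server round: each device trains locally; only weights are exchanged."""
    local_weights = [local_update(global_w, d) for d in device_datasets]
    return sum(local_weights) / len(local_weights)

# Synthetic data: each device observes samples of y = 3x.
devices = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5), (4.0, 12.0)]]
w = 0.0
for _ in range(20):                 # 20 communication rounds
    w = federated_average(w, devices)
# w converges toward 3.0 while the raw (x, y) pairs never leave the devices
```

The same pattern scales to millions of devices: compute is distributed, the model can be personalized locally, and user data stays on the device, which is exactly the set of benefits listed above.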

New AI-native processors are being developed to implement on-device AI and other AI-based applications. A good example is Qualcomm’s new Snapdragon X75 5G modem-RF chip, which has a dedicated hardware tensor accelerator. Using Qualcomm’s own AI implementation, this Gen 2 AI processor boosts the X75’s AI performance more than 2.5 times compared to the previous Gen 1 design.

While on-device AI will play a key role in improving the E2E performance of 5G networks, overall systems optimization is limited when AI is implemented independently. To enable true E2E performance optimization, AI training and inference need to be done on a system-wide basis, i.e. collaboratively across both the network and the devices. Making this a reality in wireless system design requires not only AI know-how but also deep wireless domain knowledge. This so-called cross-node AI is a key focus of 5G Advanced, with a number of use cases defined in 3GPP's Release 18 specification and further use cases expected in later releases.

Wireless AI: 5G Advanced Release 18 Use Cases

3GPP’s Release 18 is the starting point for more extensive use of wireless AI expected in 6G. Three use cases have been prioritized for study in this release:

  • Use of cross-node Machine Learning (ML) to dynamically adapt the Channel State Information (CSI) feedback mechanism between a base station and a device, thus enabling coordinated performance optimization between networks and devices.
  • Use of ML to enable intelligent beam management at both the base station and device, thus improving usable network capacity and device battery life.
  • Use of ML to enhance positioning accuracy of devices in both indoor and outdoor environments, including both direct and ML-assisted positioning.

Channel State Feedback:

CSI is used to determine the propagation characteristics of the communication link between a base station and a user device and describes how this propagation is affected by the local radio environment. Accurate CSI data is essential to provide reliable communications. With traditional model-based CSI, the user device compresses the downlink CSI data and feeds the compressed data back to the base station. Despite this compression, the signalling overhead can still be significant, particularly in the case of massive MIMO radios, reducing the device’s uplink capacity and adversely affecting its battery life.
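To make the overhead trade-off concrete, here is a toy sketch. It is illustrative only, using a simple SVD truncation rather than the neural compression under study in 3GPP: a channel dominated by a few propagation paths is nearly low-rank, so the device can feed back a few strong components instead of the full channel matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bs, n_ue = 64, 4  # base-station / device antennas (toy massive MIMO link)

def path(n_bs, n_ue):
    """One propagation path: a rank-1 complex outer product."""
    u = rng.normal(size=n_bs) + 1j * rng.normal(size=n_bs)
    v = rng.normal(size=n_ue) + 1j * rng.normal(size=n_ue)
    return np.outer(u, v)

# Toy channel dominated by two paths -> nearly rank-2, plus small noise.
noise = rng.normal(size=(n_bs, n_ue)) + 1j * rng.normal(size=(n_bs, n_ue))
H = path(n_bs, n_ue) + 0.5 * path(n_bs, n_ue) + 0.01 * noise

# Device side: compress by keeping only the k strongest singular components.
U, s, Vh = np.linalg.svd(H, full_matrices=False)
k = 2
feedback = (U[:, :k], s[:k], Vh[:k])  # what the device reports on the uplink

# Base-station side: reconstruct the channel estimate from the compressed report.
H_hat = feedback[0] @ np.diag(feedback[1]) @ feedback[2]

ratio = sum(a.size for a in feedback) / H.size            # fraction of raw payload fed back
err = np.linalg.norm(H - H_hat) / np.linalg.norm(H)       # small, since H is near-low-rank
```

The learned, data-driven schemes described in the text play the same role as the truncation here, but adapt the compression to the actual radio environment rather than to a fixed linear basis.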

An alternative approach is to use AI to track the various parameters of the communications link. In contrast to model-based CSI, a data-driven air interface can dynamically learn from its environment to improve performance and efficiency. AI-based channel estimation thus overcomes many of the limitations of model-based CSI feedback techniques, resulting in higher accuracy and hence improved link performance. This is particularly effective at the edges of a cell.

Implementing ML-based CSI feedback, however, can be challenging in a system with multiple vendors. To overcome this, Qualcomm has developed a sequential training technique that avoids the need to share proprietary models across vendors. With this approach, the user device's model is trained first using its own data. The same dataset is then used to train the network-side model. This eliminates the need to exchange proprietary neural network models across vendors. Qualcomm has successfully demonstrated sequential training on massive MIMO radios at its 3.5GHz test network in San Diego (Exhibit 1).

Exhibit 1: Realizing system capacity gain even in challenging non-LOS communication (© Qualcomm Inc.)
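The sequential-training idea can be sketched as follows. This is a deliberately simplified stand-in, with a PCA-style encoder and a least-squares decoder rather than Qualcomm's actual neural models: the device vendor trains its encoder on its own data, then shares only (code, target) pairs, so the network vendor can fit its own decoder without ever seeing the device-side model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Step 1 -- device vendor: learns an encoder on its own CSI samples.
# Toy CSI vectors with rapidly decaying variance per dimension -> compressible.
X = rng.normal(size=(500, 16)) * (2.0 ** -np.arange(16))
_, _, Vt = np.linalg.svd(X, full_matrices=False)
encode = lambda x: x @ Vt[:4].T  # device-proprietary encoder (4-dim code)

# Step 2 -- network vendor: receives only (code, target) pairs from the same
# dataset, never the encoder itself, and fits its own decoder by least squares.
Z = encode(X)
W, *_ = np.linalg.lstsq(Z, X, rcond=None)
decode = lambda z: z @ W  # network-side decoder

# End-to-end reconstruction quality, achieved without exchanging models.
err = np.linalg.norm(X - decode(encode(X))) / np.linalg.norm(X)
```

The key property illustrated here is that only the shared training samples cross the vendor boundary; both the encoder and the decoder remain proprietary to their respective owners.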

AI-based Millimetre Wave Beam Management:

The second use case involves the use of ML to improve beam prediction on millimetre wave radios. Rather than continuously measuring all beams, ML is used to intelligently select the most appropriate beams to measure – as and when needed. An ML algorithm is then used to predict future beams by interpolating between the measured beams – i.e. without the need to measure all beams all the time. This is done at both the device and the base station. As with CSI feedback, this improves network throughput and reduces power consumption.
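A toy sketch of the sparse-measurement idea, with plain linear interpolation standing in for the ML predictor described above: measuring only a quarter of the beams and predicting the rest can still identify the best beam.

```python
import numpy as np

n_beams = 64
angles = np.linspace(-60, 60, n_beams)  # beam pointing angles (degrees)
true_dir = 17.0                         # toy user direction

# Full sweep (the expensive baseline): measure the gain of every beam.
# Toy gain model: a smooth lobe centred on the user direction.
gains = np.exp(-((angles - true_dir) / 10.0) ** 2)

# Sparse sweep: measure only every 4th beam, predict the rest by interpolation
# (a simple stand-in for the ML predictor).
meas_idx = np.arange(0, n_beams, 4)
predicted = np.interp(angles, angles[meas_idx], gains[meas_idx])

best_full = int(np.argmax(gains))      # beam chosen by the full sweep
best_pred = int(np.argmax(predicted))  # beam chosen with 1/4 the measurements
```

With a smooth beam pattern like this, the sparse scheme selects the same beam as the exhaustive sweep while cutting measurement overhead by a factor of four, which is the throughput and power benefit the text describes.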

Qualcomm recently demonstrated the use of ML-based algorithms on its 28GHz massive MIMO test network and showed that the performance of the AI-based system was equivalent to a base case network set-up where all beams are measured.

Precise Positioning:

The third use case involves the use of ML to enable precise positioning. Qualcomm has demonstrated multi-cell round-trip time (RTT) and angle-of-arrival (AoA)-based positioning in an outdoor network in San Diego. The vendor also demonstrated how ML-based positioning with RF fingerprinting can overcome challenging non-line-of-sight channel conditions in indoor industrial private networks.
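A minimal sketch of the multilateration step behind RTT positioning (illustrative and noiseless): each RTT measurement yields a range to a known base station, and linearizing the range equations gives a least-squares estimate of the device's position.

```python
import numpy as np

# Known base-station positions (metres) and a device at an unknown spot.
bs = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
device = np.array([180.0, 320.0])

# Each RTT measurement yields a range estimate r = c * RTT / 2 (here, noiseless).
ranges = np.linalg.norm(bs - device, axis=1)

# Subtracting the first station's range equation from the others cancels the
# quadratic term ||x||^2, leaving a linear system solvable by least squares.
A = 2 * (bs[1:] - bs[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(bs[1:] ** 2, axis=1) - np.sum(bs[0] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
# est recovers the device position from range measurements alone
```

In practice the ranges are noisy and affected by non-line-of-sight bias, which is where the ML enhancements described above (including RF fingerprinting) come in.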

An AI-Native 6G Air Interface

6G will need to deliver a significant leap in performance and spectrum efficiency compared to 5G if it is to deliver even faster data rates and more capacity while enabling new 6G use cases. To do this, the 6G air interface will need to accommodate higher-order Giga MIMO radios capable of operating in the upper mid-band spectrum (7-16GHz), support wider bandwidths in new sub-THz 6G bands (100GHz+) as well as on existing 5G bands. In addition, 6G will need to accommodate a far broader range of devices and services plus support continuous innovation in air interface design.

To meet these requirements, the 6G air interface must be designed to be AI native from the outset, i.e. 6G will largely move away from the traditional, model-driven approach of designing communications networks and transition toward a data-driven design, in which ML is integrated across all protocols and layers with distributed learning and inference implemented across devices and networks.

This will be a truly disruptive change to the way communication systems have been designed in the past but will offer many benefits. For example, through self-learning, an AI-native air interface design will be able to support continuous performance improvements, where both sides of the air interface — the network and device — can dynamically adapt to their surroundings and optimize operations based on local conditions.

5G Advanced wireless AI/ML will be the foundation for much more AI innovation in 6G and will result in many new network capabilities. For instance, the ability of the 6G AI-native air interface to refine existing communication protocols and learn new protocols, coupled with the ability to offer E2E network optimization, will result in wireless networks that can be dynamically customized to suit specific deployment scenarios, radio environments and use cases. This will be a boon for operators, enabling them to automatically adapt their networks to target a range of applications, including various niche and vertical-specific markets.

Related Posts:

AI Drives Cloud Player Capex Amid Cautious Overall Spend

  • Cloud service providers’ capex is expected to grow by around 8% YoY in 2023 due to investments in AI and networking equipment.
  • Microsoft and Amazon are among the highest spenders as they invest in data center development. Microsoft will spend over 13% of its capex on AI infrastructure.
  • AI infrastructure can be 10x-30x more expensive than traditional general-purpose data center IT infrastructure.
  • Chinese hyperscalers’ capex is decreasing due to their inability to access NVIDIA’s GPU chips, and decreasing cloud revenues.

New Delhi, Beijing, Seoul, Hong Kong, London, Buenos Aires, San Diego – July 25, 2023

Global cloud service providers will grow capex by an estimated 7.8% YoY in 2023, according to the latest research from Counterpoint’s Cloud Service. Higher debt costs, enterprise spending cuts and muted cloud revenue growth are impacting infrastructure spend in data centers compared to 2022.

Commenting on the large cloud service providers’ 2023 plans, Senior Research Analyst Akshara Bassi said, “Hyperscalers are increasingly focusing on ramping up their AI infrastructure in data centers to cater to the demand for training proprietary AI models, launching native B2C generative AI user applications, and expanding AIaaS (Artificial Intelligence-as-a-Service) product offerings”.

According to Counterpoint’s estimates, around 35% of the total cloud capex for 2023 is earmarked for IT infrastructure including servers and networking equipment compared to 32% in 2022.

Global Cloud Service Providers' Capex (Source: Counterpoint Research)
2023 Capex Share (Source: Counterpoint Research)

In 2023, Microsoft and Amazon (AWS) will account for 45% of the total capex. US-based hyperscalers will contribute 91.9% of the overall global capex in 2023.

Chinese hyperscalers are spending less due to slower growth in cloud revenues amid a weak economy and difficulties in acquiring the latest NVIDIA GPU chips for AI due to US bans. The A800 – a scaled-down version of the flagship A100/H100 chips that NVIDIA has been supplying to Chinese players – may also come under the purview of the ban, further reducing access to AI silicon for Chinese hyperscalers.

Global Cloud Service Providers' AI Spend as % of Total Capex, 2023 (Source: Counterpoint Research)

Based on Counterpoint estimates, Microsoft will spend proportionally the most on AI-related infrastructure with 13.3% of its capex directed towards AI, followed by Google at around 6.8% of its capex. Microsoft has already announced its intention to integrate AI within its existing suite of products.

AI infrastructure can be 10x-30x more expensive than traditional general-purpose data center IT infrastructure.

Though Chinese players are directing a larger portion of their spending towards AI, the amount is significantly less than that of their US counterparts due to a lower overall capex.

The comprehensive and in-depth ‘Global Cloud Service Providers Capex’ report is available. Please contact Counterpoint Research to access the report.

Background

Counterpoint Technology Market Research is a global research firm specializing in products in the technology, media and telecom (TMT) industry. It services major technology and financial firms with a mix of monthly reports, customized projects, and detailed analyses of the mobile and technology markets. Its key analysts are seasoned experts in the high-tech industry.

Analyst Contacts

Akshara Bassi

Peter Richardson

Neil Shah

Follow Counterpoint Research

press@counterpointresearch.com

Related Posts

How Far Has Technology Come in 20 Years?

Source: Created with Stable Diffusion

Twenty years ago, I was an equity analyst for a Wall Street investment bank. At the time, my research director liked to get all the analysts to write occasional thought pieces. In the following article written in June 2003, I chose to write a speculative piece that looked back to 2003 from five years in the future, i.e. 2008. I speculated that there would be quite a few technological leaps in the five intervening years.

Given the 20 years that have now passed since I wrote the article, how many of those technologies have actually come into being? As you will see, not many, while others that were not foreseen have matured – for example, app-based smartphones and music streaming.

Without specifically naming it as artificial intelligence, I foresaw a role for cloud-based intelligent software agents that would provide intuitive assistance in multiple situations – a true digital assistant. These have not come into being and they are not even much discussed. We do have digital assistants such as Apple’s Siri, Google Assistant or Amazon’s Alexa, but they are mostly incapable of anything more than answering simple questions and certainly couldn’t be trusted to book travel tickets, make restaurant reservations or update other people’s diaries. While ChatGPT and derivatives of Large Language Models seem superficially smarter, they are still not yet at the stage of being able to function as a general assistant.

One other technology referenced in the article that is still far from maturity is augmented reality. The glasses described were not too far-fetched – Microsoft’s HoloLens can achieve some of what is described and Epson and Vuzix, for example, have developed glasses that are in use by field service engineers. But these products are not able to reference real-world objects. Apple’s forthcoming Vision Pro, while technically brilliant, would not be a suitable solution for the use case described.

At the end of the article, I listed companies that I expected to be playing a significant role in the development of the various technologies highlighted. But where are those companies now?

For context, and for the younger readers, around the turn of this century, third-generation cellular licenses had been expensively auctioned in several countries and many mobile operators were struggling to generate a return on their investment. Oh, how things have changed (or not)! As an analyst covering mobile technology, I could see that investors were valuing mobile operators solely on their voice and text revenues, with zero value being ascribed to future data revenues. My article was also an attempt to awaken investors to the potential value beyond voice.

Anyway, here’s the report that I wrote in mid-2003. It was written as though it were an article in a business newspaper.

Special Report – June 2008

Connected People

It is just eight years since European wireless telecom companies became the subject of outright derision for spending billions of dollars on licenses to operate third-generation cellular networks. Now the self-same companies have become core to our everyday existence. Their stock, which bottomed in the middle of 2002, has risen steadily ever since.

The original promise of 3G technology was high-speed data networking coupled with an exceptional capacity for both voice and data. But critics said that it was an innovation users didn’t need, want or would be willing to pay for.

When the first commercial 3G networks appeared in 2003 and faltered at the first step, the doubters started to look dangerously like they had a point. But the universe is fickle and within the last two or three years, the combination of maturing networks and the inevitable power of Moore’s Law has started to deliver wireless devices and applications that would have been thought of, if not as science fiction, then at least science-stretching-the-bounds-of-credibility, when the licenses were issued.

However, while the long-time infamy of 3G means it is taking the starring role as industry watchers chart the chequered history of the technology, it is the supporting cast of technologies that has really delivered the goods. Without them, 3G would have remained just another method to access the backbone network.

The following snapshots from one perfectly ordinary day last month show how the coordinated application of a whole slew of technologies has subtly but distinctly altered our lives.

Bristol – May 1, 2008, 12:57 pm

Beads of sweat form on the face of Jim McKenna, a 24-year-old technician, as he studies the guts of a damaged generator. McKenna is a member of a rapid response team, looking after mission-critical power generation facilities across Southern England.

“Dave, I’ve located the damaged circuits, I think I can repair it, but the control unit is non-standard and I’ve not seen one like it before. Can you help me out here?”

McKenna’s voice is picked up by a tiny transducer microphone embedded in a Bluetooth-enabled hands-free earbud. The bud is so small it nestles unobtrusively in the technician’s ear. The earbud is wirelessly connected to the small transceiver on McKenna’s belt. His voice activates a ‘push’-to-talk connection to his controller in the Scottish technical support center. The word push is in quotes because it is his voice that effects the push, leaving McKenna’s hands entirely free.

In the Edinburgh-based command center, David Sanderson, an experienced engineer, maximizes the image from one of a half-dozen sub-screens that compete for his attention. Each screen shows live pictures from his team of technicians with data about their location and degree of job completion.

Sanderson taps the screen again and, 400 miles away in Bristol, a tiny camera on McKenna’s smart glasses zooms in on the generator specification plate. Sanderson peers intently at the screen:

“I see a code on the side panel. I’ve highlighted it for you. Can you scan it? I can then pull the circuit files for you”.

Seemingly in mid-air, a red circle appears around a barcode away to McKenna’s right. The heads-up display in McKenna’s glasses maintains a fix on the code even though he moves his head. He leans across and uses the camera to scan the code, which is instantaneously transmitted back to Edinburgh where the circuit plans are uploaded from the database. Sanderson extracts the relevant section before speaking again to McKenna.

“Jim, I’m initiating the synchronization, you should have it in a few seconds.”

The 3G transceiver on Jim’s belt receives the information and immediately routes it to his smart glasses via Bluetooth. As Jim looks at the damaged circuitry, the heads-up display begins to superimpose the circuit diagram over the actual circuits, adjusting for size. He spends a few minutes comparing the damaged circuits with the schematic images. He calls for more backup.

“Dave, the problem is definitely in this sector of the step-down circuit,” McKenna points to a series of circuit boards, “is there a suggested workaround in the troubleshooting file?”

Within minutes the heads-up display starts guiding McKenna through a series of measures that isolates and bypasses the damaged circuits. Within 20 minutes, McKenna successfully reboots the system – power is restored.

Five years ago, very little of the above could have been done as efficiently and intuitively. Field service engineers needed substantial experience to tackle complex tasks – they also had to carry heavy, often ruggedized PCs and a whole series of manuals on CD-ROMs. Technical backup, where available, was a cellular voice call.

Liverpool Street, London, May 1, 2008, 2:32 pm

Joanne King, an equity analyst, is meeting a buy-side client. As they settle into the soft leather chairs of the meeting room, she slides a flexible plastic sheet across the table. The sheet is printed with electronic ink. The latest marketing pack was downloaded to her mobile terminal on the way over in the taxi. She taps the screen of her smartphone and the slide set appears on the sheet. As Joanne and her client discuss the vagaries of the stock market, they are able to use virtual tabs to flip between ‘pages’ within the pack. When the client requests more information on the balance sheet of one of the companies they’re discussing, Joanne is able to pull down the necessary information, adding it to the slide set.

Partway through the discussion, Joanne hears a subtle tone in her ear indicating an urgent communication request from her personal digital assistant. She apologizes to the client before initiating the communication path. “Wildfire, what’s the problem?” she knows that Wildfire will only override her no-interrupt rule if an issue requires immediate attention.

“An air traffic control strike in Paris has disrupted all flights. Your 6 pm Brussels flight is showing a two-hour delay and may be canceled. The best alternative is to take the Eurostar train. Services leave at 16:30 and 18:30.”

After a moment’s thought, Joanne comes to a decision: “Book the 16:30, please.” Conscious of the topics still to cover in her meeting, she adds, “Can you also have a taxi waiting when I am through here?”

Wildfire confirms the instructions and drops back into meeting mode. Joanne apologizes to the client and resumes her meeting. Meanwhile, Joanne’s software agent communicates with various travel services, canceling her flight reservation and booking the rail service.

Having learned from Joanne’s prior behavior, the agent books a First Class seat in a carriage toward the front of the train. The agent also communicates with a taxi firm – a car will be waiting when her meeting is completed. The agent is authorized to spend money within predefined limits. Simultaneously, the agent modifies Joanne’s expense report and calendar.

Joanne’s dinner date with friends in Brussels will be hard to keep given the change in travel plans. The agent negotiates with the diaries of her three dinner guests and the reservation computer at their chosen restaurant. A new reservation is agreed and four diaries are updated accordingly.

At the conclusion of her meeting, Joanne leaves the slide set contained in the pre-punched flexible display. Her client will be able to store it in standard folders and refer to it at leisure. Solar cells ensure that there is enough power to display the material without having to worry about battery charge.

As she heads for the taxi, Joanne’s location-aware PDA recognizes she is in motion and, therefore, ready to communicate. “Joanne, you have 2 voice messages, 23 business e-mails and 12 personal e-mails. How would you like me to handle them?” Joanne chooses to listen and respond to a voicemail on the short taxi ride to Waterloo, deferring the e-mails for the train.

Once in her seat on the Eurostar train, Joanne unfolds a screen and keyboard that work alongside her 3G smartphone. Bluetooth provides the link between the smartphone, screen and keyboard. The Light Emitting Polymer screen is extremely lightweight and flexible, yet delivers high contrast and color resolution. Power consumption is low.

Joanne spends an hour responding to the e-mails before kicking off her shoes and taking out an e-book to settle down to listen to some music. She is particularly looking forward to a new album she bought on the way to the station. A song she was unfamiliar with came over the radio in the taxi – loving it, but not knowing what it was, Joanne recorded a quick burst. Vodafone, her service provider, was able to identify the music and offered to sell her the single or album. In anticipation of her long train ride, she chose the album. Leaning back in her seat, she lets the cool beats ease her to Brussels.

In 2003, one-on-one presentations were either made from a PC screen or delivered on regular paper. Meeting interruptions were either obtrusive or impossible, and changing travel reservations on the fly typically required several people – often with intervention by the traveler herself. Meanwhile, mobile e-mail was possible but only on large-screen PCs, compromised by size, weight and power consumption, or devices with screens and keyboards too small for anything other than limited responses.

Hyde Park – May 1, 2008, 2:18 pm

Mike Lee is on his way home from high school. He flips his skateboard down three steps and dives for cover in the bushes, the sound of gunfire ringing in his ears. Peering through the leaves, he holds a small flat panel console in front of him. He scans through 120 degrees, concentrating on the screen. The intense rhythms of electro-house are now the loudest sounds he hears, but there is also the distant rap of gunfire. On the screen, he sees the surrounding park, but in addition, the occasional outlandish figure appears, flitting between hiding places among the trees. “Josh! Where are you?” Mike demands in an urgent whisper.

“I’m by the lake dude. Surrounded. Can you get down here? I’m running out of ammo.”

Mike swings around, looking toward the lake through his device. He sees Josh’s position highlighted on the screen. He turns back, takes a deep breath and starts jabbing buttons on his device. Explosions and smoke fill the screen. Then running to the path, he jumps back on his skateboard and carves down the hill to the lake, pitching into the shrubbery next to his buddy Josh. They proceed to engage the advancing enemy in a frenzy of laser grenades, gunfire and whoops of delight.

After a few minutes, they both hear the words they have been waiting for, “Well done men, you have completed Level 12. Hit the download button to move on to the next level.”

Mobile gaming, even as recently as 2003, offered a relatively poor user experience. Simple Java games were the norm. Games now not only involve online buddies but they are also immersed into the surrounding environment, massively enhancing the experience. 

3G has come a long way from its ignominious start. However, the real catalyst that has made it a life-changing technology has been the incredible range of diverse technologies that have emerged to support the growth in wireless voice and data applications.

Cast List:

3G smartphones – Nokia, Motorola

Bluetooth earbuds – Sound ID

Heads-up display – Microvision

Voice-driven push-to-talk – Sonim

Voice control – Advanced Recognition Technologies

Personal digital assistant – Wildfire

Electronic ink pad – E Ink, Philips Electronics

Music capture – Shazam Entertainment

Foldable Light Emitting Polymer Display – Technology from Cambridge Display Technology

Augmented reality game console – Nokia N-Gage 4

Intelligent mobile agents – Hewlett Packard

Geo-location technology – Openwave

Where are these companies in 2023?

My original cast of technology characters has seen mixed fortunes, some are still around but with different owners while others have disappeared altogether. Few are still going in their original business niche:

Nokia and Motorola are brands that are still making mobile devices, but in different guises than in 2003.

I don’t know what became of Sound ID. There is an app called SoundID created by Sonarworks, but it is different and unrelated to the Sound ID identified in the article. But Bluetooth True Wireless earbuds are now a huge market.

Microvision is still in business but has shifted its focus to LiDAR in the automotive space.

Sonim is still in business and still making ruggedized devices, including push-to-talk devices for the safety and security sectors.

Advanced Recognition Technologies was acquired by Scansoft in 2005.

Wildfire was an innovative voice-controlled personal assistant that was acquired by the operator Orange in 2000. But Orange killed the service in 2005.

E Ink still exists, although Philips parted ways with it in 2005.

Shazam still exists but was acquired by Apple in 2018. When it started in 2002, you had to dial a short number and hold your phone to the sound source. Users would then receive an SMS with the song title and artist.

Cambridge Display Technology is still around. It was floated on Nasdaq in 2004 and acquired by Sumitomo Chemical in 2007.

Hewlett Packard is clearly still around. However, it doesn’t make intelligent software agents. But then again, neither does anyone else, at least not in the way portrayed in the article.

Openwave no longer exists, although many of its businesses have been absorbed into other entities.
