Friday 23 September 2016

From 3GPP: Heading towards 5G

An article for the Eurescom message, by Balazs Bertenyi, 3GPP TSG SA Chairman
3GPP standards have played a pivotal role in the success of LTE, making it the fastest growing cellular technology in history. Never before has a new radio technology made it to the market so quickly and widely after the finalization of the first version of the standards (3GPP Release 8 was finalized in December 2008). 

For the first time in history LTE has brought the entire mobile industry to a single technology footprint resulting in unprecedented economies of scale. After the initial LTE Release, work in 3GPP has been centered on the following strategic areas:
  1. Enhancing LTE radio standards to further improve capacity and performance;
  2. Enhancing system standards to make LTE and EPC available to new business segments;
  3. Introducing improvements for system robustness, especially for handling exponential smartphone traffic growth.
This article focuses on the latter two aspects, and outlines the potential standards path towards the 5G era.
Here is a snapshot of the main features 3GPP has been working on, and their timelines:
[Figure: timeline of the main features 3GPP has been working on]

Addressing new business segments

The converged footprint of LTE has made it an attractive technology baseline for several segments that had traditionally operated outside the commercial cellular domain. 
In particular, the critical communication and public safety community has turned to LTE for developing their next generation broadband data system. 3GPP has embraced this initiative and committed to deliver the necessary standards enhancements to make the LTE/EPC system suitable for this purpose. Work has started in Release-12, and standards for the first batch of features will be completed by December this year.
These features include enhancements for direct device-to-device (D2D) communications as well as group communication services, both of which are essential to achieve TETRA/P.25-like functionality for broadband data.
  • D2D allows devices in close proximity to communicate directly with each other, thereby enabling authorities to communicate out-of-network-coverage or during network outages (e.g. in case of a natural disaster). There are also commercial benefits of D2D, with new applications building on the physical proximity of users being trialed by operators.
  • Group communications allow authorities to create and dissolve groups on demand with resource efficient communications (e.g. multicast) within the group.
Work on further functions for critical communications will continue in Release 13, for example, in the area of enabling relays (relaying between in-coverage and out-of-coverage devices) and push-to-talk type functionality.
Machine-type Communications (e.g. smart meters) have been using cellular networks for some time now, primarily over GSM and GPRS. Whilst 2G technologies provide an inexpensive means of basic connectivity and modest data rates, there is growing demand for a more versatile M2M platform. The challenge in the industry (and in 3GPP standards) is the lack of convergence across M2M providers with respect to traffic patterns and system needs. Hence, a holistic approach to an LTE-based M2M architecture design has not (yet) materialized. This has led to 3GPP standards work being focused on several different, smaller enablers so far:
  • Radio optimizations to allow for lower cost LTE chipsets;
  • System level awareness of M2M devices, i.e. the system can identify such devices and apply selective handling as per operator configuration (e.g. selective disabling in case of overload);
  • Device power consumption optimizations;
  • Mechanisms for optimized handling of small amounts of data.   
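To give a feel for why the power-consumption enabler matters, here is a back-of-envelope battery-life estimate for a duty-cycled M2M device. All current and timing figures below are illustrative assumptions of mine, not 3GPP-specified values:

```python
# Back-of-envelope battery-life estimate for a duty-cycled M2M device.
# All current and timing figures are illustrative assumptions, not 3GPP values.

def battery_life_days(battery_mah: float,
                      active_ma: float, active_s_per_day: float,
                      sleep_ma: float) -> float:
    """Average a short daily active burst against deep sleep the rest of the day."""
    sleep_s_per_day = 24 * 3600 - active_s_per_day
    # Average current in mA, weighted by time spent in each state.
    avg_ma = (active_ma * active_s_per_day + sleep_ma * sleep_s_per_day) / (24 * 3600)
    return battery_mah / avg_ma / 24  # mAh / mA = hours; / 24 = days

# A hypothetical smart meter reporting once a day: 100 mA active for 30 s,
# 0.01 mA in deep sleep, on a 2000 mAh battery -> roughly five years.
print(round(battery_life_days(2000, 100.0, 30.0, 0.01)))
```

The point of the optimizations is precisely to push the sleep current and the active time down, since the average current (and hence battery life) is dominated by them.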

System capacity and robustness

The exponential growth of smartphones and the traffic they generate has become a major challenge for the industry. Network investments are not able to keep pace with the growing demand for data. A big portion of the work 3GPP has undertaken in recent years has been driven by the need to alleviate this challenge.

One key element in decreasing the data load on cellular networks is to offload traffic to WiFi, especially bulk traffic that does not require any special handling for service delivery or charging. The 3GPP-standard mechanism for this is built around a new functional element, the Access Network Discovery and Selection Function (ANDSF). The ANDSF conveys policies to devices, facilitating selection of either cellular or WiFi access for different kinds of traffic (e.g. based on IP flow designation):
[Figure: ANDSF policy-based selection between cellular and WiFi access]
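The policy-matching idea behind ANDSF can be sketched in a few lines. The rule format below is invented for illustration; the real ANDSF management object (3GPP TS 24.312) is far richer, with validity areas, time-of-day conditions and more:

```python
# Toy illustration of ANDSF-style access selection: the network pushes
# prioritized rules to the device, which matches its traffic flows against
# them.  The rule format here is invented for illustration only.

POLICY = [  # lower 'priority' value wins
    {"priority": 1, "match": {"dst_port": 443, "app": "video"}, "access": "wifi"},
    {"priority": 2, "match": {"app": "voip"},                   "access": "cellular"},
    {"priority": 99, "match": {},                               "access": "cellular"},  # default
]

def select_access(flow: dict) -> str:
    """Return the access network for a flow per the first matching rule."""
    for rule in sorted(POLICY, key=lambda r: r["priority"]):
        if all(flow.get(k) == v for k, v in rule["match"].items()):
            return rule["access"]
    return "cellular"

print(select_access({"app": "video", "dst_port": 443}))  # wifi
print(select_access({"app": "voip"}))                    # cellular
```

The key design point is that the operator, not the device, authors the rules, so the same handset can be steered differently by different networks.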
Release-12 enables seamless mobility between WiFi and cellular accesses with multiple connections, see figure below (cf. 3GPP TS 23.402):
[Figure: seamless mobility between WiFi and cellular with multiple connections]

Release-12 will also develop an even tighter integration of cellular and WiFi access by having the LTE RAN convey parameters and rules for offloading:
[Figure: LTE RAN conveying offloading parameters and rules]
All in all, as traffic growth continues, operators will need more and more innovative functionality in their networks to cope with it. Unlicensed spectrum (via WiFi or other technologies) will continue to play an important role in this quest.
Security certification of network elements is becoming an increasingly important issue in many regions. To ease deployments and avoid fragmentation it is critical for operators, and vendors alike, that certification of network products is harmonized as much as possible. To this end 3GPP initiated a new endeavor together with GSMA to converge security assurance of network elements around a single methodology. Release-12 specifications outline a method whereby 3GPP generates Security Assurance Specifications (SAS) for each functional element. GSMA takes responsibility for accreditation using the SAS documents, and also manages potential dispute processes.    

Moving towards the 5G era

The term ‘5G’ is rapidly coming into the limelight. Much of this early hype phenomenon is directly attributable to the (well-founded) bandwidth-thirst of the industry. Examining the potential technology trends behind this hype one can find the following likely pillars going forward: 

Extensive capacity need in dense areas; 

LTE already has a Small Cell concept, defined in Release-12, that is optimized as much as technologically possible for the current bands. A potential enhancement being discussed for Release 13 is to make LTE suitable for unlicensed spectrum bands. Whilst the exact nature and focus of this work is still under discussion, it is clear that such an enhancement would provide further means to deal with the traffic load.
To further boost the capacity of dense areas it is expected that new licensed spectrum bands (in particular in higher frequency bands, with up to ~1 GHz carrier bandwidth) will also be needed. Initial research shows that such high frequency bands might require a new radio waveform, i.e. a new radio technology. It is yet unclear when, whether and where the standardization of such a new radio will be undertaken. Nonetheless, the earliest such work could potentially be initiated in 3GPP is around the 2015-2016 timeframe.

Ubiquitous coverage;

For the currently available bands, LTE is very close to reaching the technologically possible efficiency limits. Hence, it is expected that LTE will remain the baseline technology for wide area broadband coverage in the 5G era as well. 3GPP will continue working on enhancing LTE not only from the radio perspective, but also from the service delivery perspective (e.g. making it more suitable for M2M). Consequently, interworking with LTE will remain a critical factor.

Ever increasing cost pressure;

Traditional telecommunications equipment has tied hardware and software close together. Advancements in hardware technology, as well as the success of virtualization in the IT industry, have brought the notion of virtualization into the mobile network space. Separation of the user plane and control plane has long been a key design element of mobile networks, making most of the mobile network architecture an ideal candidate for virtualized deployments. Industry activity and the inception of specialized industry interest groups (e.g. ETSI NFV) all point towards the feasibility of this approach, bringing the following main benefits:
  • Enhancing the level of automation
  • Decoupling software functions from the resources
  • Allowing faster service introduction
  • Providing service and network performance analysis and optimization   
3GPP standards work on virtualization is about to start, in Release 12. Initial focus will be on O&M aspects; work on core network and radio architecture is expected to follow later.
History has shown that the mobile industry undergoes a major technology shift roughly once every 10 years. There is a vast array of technology developments on the horizon, and demand for them is greater than ever. The global footprint, and the success, of 3GPP standards will continue to put pressure on the Project to get new specifications out in a timely manner. To achieve this, intensified industry collaboration becomes more important than ever before. As we add 5G discussions to the mix, we will have plenty to keep us busy in 3GPP for the foreseeable future.


Saturday 17 September 2016

WiFi is much needed for offloading many cellular services, which can lead to an expansion of the customer base.

Wi-Fi is not as ubiquitous as cellular, but finding Wi-Fi is like finding a coffee shop. There are initiatives to make it more ubiquitous, as it has the potential to carry the largest share of internet data flowing over wireless access networks.

Before going into the main theme of cellular and Wi-Fi convergence (note: though I want to generalize here for cellular, I am actually specific to the LTE/EPC network when talking about cellular), let's go through some points from the Cisco VNI report for 2015-2020.

Global mobile data traffic grew 74 percent in 2015. Global mobile data traffic reached 3.7 exabytes per month at the end of 2015, up from 2.1 exabytes per month at the end of 2014.

Mobile offload exceeded cellular traffic for the first time in 2015. Fifty-one percent of total mobile data traffic was offloaded onto the fixed network through Wi-Fi or femtocell in 2015. In total, 3.9 exabytes of mobile data traffic were offloaded onto the fixed network each month.
And the forecast …..

Mobile data traffic will reach the following milestones within the next 5 years:
●   Monthly global mobile data traffic will be 30.6 exabytes by 2020.
●   The number of mobile-connected devices per capita will reach 1.5 by 2020.
●   The average global mobile connection speed will surpass 3 Mbps by 2017.
●   The total number of smartphones (including phablets) will be nearly 50 percent of global devices and connections by 2020.
●   Because of increased usage on smartphones, smartphone traffic will exceed four-fifths of mobile data traffic by 2020.
●   Monthly mobile tablet traffic will surpass 2.0 exabytes per month by 2020.
●   4G connections will have the highest share (40.5 percent) of total mobile connections by 2020.
●   4G traffic will be more than half of the total mobile traffic by 2016.
●   More traffic was offloaded from cellular networks (on to Wi-Fi) than remained on cellular networks in 2015.
●   Three-fourths (75 percent) of the world’s mobile data traffic will be video by 2020.
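The figures above are internally consistent, which a couple of lines of arithmetic can confirm (the input numbers are the ones quoted from the Cisco VNI report):

```python
# Sanity-checking the VNI figures quoted above with simple arithmetic.

cellular_2015, offload_2015 = 3.7, 3.9   # exabytes per month, end of 2015
forecast_2020 = 30.6                     # exabytes per month, 2020 forecast

# Offload share in 2015: matches the "fifty-one percent" quoted above.
share = offload_2015 / (offload_2015 + cellular_2015)
print(f"offload share 2015: {share:.1%}")

# Implied compound annual growth rate of cellular traffic, 2015 -> 2020.
cagr = (forecast_2020 / cellular_2015) ** (1 / 5) - 1
print(f"implied CAGR: {cagr:.0%}")
```

The implied growth rate of roughly 53 percent per year is the backdrop against which all the offload and convergence arguments below should be read.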

3GPP has been continuously evolving its Wi-Fi integration architecture, more attentively since 2008 with Rel-8 alongside LTE, and with recent advancements in Rel-12 and Rel-13, where it becomes more of a RAN scenario as the radio-specific standards incorporate Wi-Fi network related parameters. 3GPP has created a dichotomy of Wi-Fi access as trusted or untrusted, and its solution evolution has been carried out around this dichotomy so far. Though Wi-Fi has advanced to the point of being a secure access, and mechanisms like HOTSPOT 2.0 have been defined for making Wi-Fi a trusted network, Wi-Fi is more like a mushroom network and in perception is always untrusted.

Assuming Wi-Fi is 'untrusted' also serves the need for a ubiquitous solution, as it makes the solution universal enough to accommodate any kind of Wi-Fi access; that is how the ePDG has become synonymous with the Wi-Fi offload solution. The ePDG provides an IPsec tunnel to the end device over the underlying Wi-Fi access network, authenticates devices (with the AAA server) and provides access to the EPC core. From Rel 11 onward, however, 3GPP has been more inclined to refine its trusted-access solution, which is SaMOG/TWAG based, under the assumption of secure radio access thanks to HOTSPOT 2.0 compliant Wi-Fi APs.
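The trusted/untrusted split described above can be sketched as a simple steering decision. The trust criteria and attribute names below are simplified assumptions of mine; in reality the home operator signals the trust decision to the device during 3GPP-based access authentication:

```python
# Sketch of how an operator core might steer a Wi-Fi attach to the untrusted
# (ePDG/IPsec) or trusted (TWAG/SaMOG) integration path.  The trust criteria
# and attribute names here are simplified assumptions for illustration.

def wifi_entry_point(ap: dict) -> str:
    """Return the EPC entry function for a given Wi-Fi access point."""
    # Hotspot 2.0 capable, operator-managed APs can be treated as trusted.
    if ap.get("operator_managed") and ap.get("hotspot20"):
        return "TWAG"   # trusted WLAN: S2a towards the PGW, no device IPsec tunnel
    return "ePDG"       # untrusted WLAN: device builds an IPsec tunnel (SWu)

print(wifi_entry_point({"operator_managed": True, "hotspot20": True}))  # TWAG
print(wifi_entry_point({"ssid": "coffee-shop"}))                        # ePDG
```

The "mushroom network" point above is visible even in this toy: anything the operator cannot vouch for falls through to the untrusted ePDG path by default.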

All the advancements serving the need for Wi-Fi offload, like HOTSPOT 2.0, ANDSF, the RAN-specific incorporations in Rel-12 and Rel-13, etc., are necessary, but cannot be presumed to be 'musts'. The transformation is a sequel, not a leap forward. We have to think about a universal approach catering to all, at least for a few more years, while the ecosystem emerges in time.

What's the crux.....

The 3GPP solution for Wi-Fi integration with cellular has been under the grip of what I call an inter-RAT cellular paradigm, i.e. hand-off from one RAT to another. This is not feasible between these two technologies, as they are of different natures and from different origins. We always think of integrating the two non-isotopic networks with complete hand-offs, i.e. if a device moves from one to the other it should be completely with the one, much like the 3GPP inter-RAT hand-off paradigm. But these two networks have their own sovereignty and are completely unrelated, and with such a paradigm the required control or seamless transfer is not feasible at all. Therefore 3GPP solutions find only a point of convergence, at the EPC/PGW (Rel-8 onwards), for the sake of creating an IP-CAN session and retaining the IP address, and thereby the mobility. The solution introduced interfaces with the AAA server for network authentication and for retaining the context, QoS, etc.

The solutions are seemingly more inclined to IETF-style approaches for access connectivity, i.e. connect, authenticate, solicit and access, with mobility through an anchor router like the HA, or here the PGW. These integration approaches are superimposed with a conceptual perception of hand-off which is deliberately made feasible but is not there by design.

That is where the term 'Wi-Fi offload' does not bring fidelity, as there is no feasibility by design. The most buzzing term in these solutions is 'seamlessness', which is also the theme of differentiation and innovation for the solutions mustering up for Wi-Fi offload.

What is of interest to us...

Wi-Fi is not there for hand-off, or to use the popular term, offload. It could be leveraged as an associated network: an associated data path to the LTE network, converging at the RAN or IP-RAN through a Local PGW while keeping the common EPC core. This convergence is possible if S1AP is terminated at this Local PGW and the MME is kept transparent to the converged local network. The EPC/PGW, while creating the bearer based on the IP-CAN session (which contains the policy), will provide information, based on policy, that helps the Local PGW to distinguish and forward the data flows.
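The steering role proposed for the Local PGW can be sketched as follows. The policy keys (qci, wifi_eligible) and the QCI threshold are my own illustrative assumptions; nothing like this is standardized:

```python
# Sketch of the proposed Local PGW (L-PGW) steering function: the EPC marks
# each bearer's flows with policy hints, and the L-PGW forwards them over
# either the LTE radio leg or the associated Wi-Fi leg.  The policy keys
# and thresholds below are illustrative assumptions, not standardized.

def steer(packet: dict, bearer_policy: dict) -> str:
    """Choose the radio leg for a downlink packet at the L-PGW."""
    policy = bearer_policy.get(packet["bearer_id"], {})
    # Guaranteed-bit-rate / low-QCI traffic (e.g. voice, QCI 1) stays on LTE.
    if policy.get("qci", 9) <= 4:
        return "lte"
    # Best-effort traffic flagged as offloadable goes over the Wi-Fi leg.
    if policy.get("wifi_eligible"):
        return "wifi"
    return "lte"

policies = {5: {"qci": 9, "wifi_eligible": True}, 1: {"qci": 1}}
print(steer({"bearer_id": 5}, policies))  # wifi
print(steer({"bearer_id": 1}, policies))  # lte
```

The essential point is that the decision happens per flow at the edge, while the bearer and its policy context stay anchored in the common EPC.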

Wi-Fi and cellular could be converged at the radio access level, and Wi-Fi can be used as an associated data channel for mobile access. This feasibility is well accepted now, and companies like Qualcomm have pursued even more aggressive approaches. Qualcomm is pursuing aggregation at the radio link level with a point of convergence at the PDCP layer (…).

On the convergence of the two at the RAN, the Small Cell Forum is also paying attention through research and industry surveys. Its recent whitepaper (Feb 2016), titled "Industry perspectives, trusted WLAN architectures and deployment considerations for integrated Small-Cell Wi-Fi (ISW) networks", states in section 2.0, Integrated small cell Wi-Fi (ISW) networks:
"other interesting alternatives are possible, namely integration in the SC-APs (i.e. RAN-based integration) and/or in SC-gateways (i.e. GW-based integration).
Here, the Integration function resides at the edge, possibly in an integrated ISW-AP. RAN-based integration of licensed/unlicensed access is now being addressed by 3GPP release 13, including approaches for RAN based integration of Wi-Fi and LTE.
Finally, architectures that integrate Wi-Fi and SCs at the gateway level are possible. For example, the SC-GW (i.e. H(e)NB-GW) as well as Wi-Fi GW (i.e. ePDG and/or TWAG/TWAP) may be realized together, along with associated integration functions. At the time of writing, these architectures are still in consideration and development."

So that's the future.....

This could be done at the network level, where you need not go all the way to the PGW in the EPC but instead have a local PGW in the RAN/IP-RAN network. The advantage this brings is a common EPC for both cellular and Wi-Fi networks, i.e. no need for the transfer of PDN connectivity context as in inter-RAT scenarios. The only caveat is that the solution is not for any kind of Wi-Fi device; it is for cellular devices equipped with Wi-Fi, as we need cellular for all the service-control-level functionality.

Service delivery is well controlled through the 3GPP PCC architecture, which defines the PCRF, PCEF (PGW) and Application Server (AS) level interaction and coordination, also referred to as the IP-CAN session. In the EPC we need a bearer to carry the traffic for a specific service. This bearer traffic is delivered to the RAN to reach end devices over air interfaces or radio channels. The EPC provides all the necessary information, such as the QoS parameters for a specific bearer, to the RAN for the required radio channel capacity. The interface between the EPC and RAN is S1 (S1-AP for the control path and S1-U for the data path).
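The PCC chain just described can be sketched as a small mapping from a PCC rule to the bearer QoS that is signalled to the RAN. The field names (QCI, ARP, GBR/MBR) follow 3GPP conventions, but the mapping itself is a simplified sketch, not the normative procedure:

```python
# Minimal sketch of the PCC chain described above: a PCC rule installed by
# the PCRF at the PGW (PCEF) is mapped to EPS bearer QoS that is then
# signalled to the RAN over S1.  Field names follow 3GPP conventions
# (QCI, ARP, GBR/MBR) but the mapping itself is simplified.

def bearer_qos_from_pcc_rule(rule: dict) -> dict:
    """Derive the EPS bearer QoS information from a PCC rule."""
    qos = {"qci": rule["qci"], "arp": rule.get("arp", 9)}
    if rule["qci"] <= 4:  # QCIs 1-4 are GBR bearers
        qos["gbr_kbps"] = rule["gbr_kbps"]
        qos["mbr_kbps"] = rule.get("mbr_kbps", rule["gbr_kbps"])
    return qos

# A voice-style rule: QCI 1 (conversational voice), guaranteed 64 kbps.
print(bearer_qos_from_pcc_rule({"qci": 1, "arp": 2, "gbr_kbps": 64}))
```

In the author's proposal it is exactly this per-bearer QoS context that a Local PGW would need to see in order to steer flows sensibly.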

Instead of having a direct interface between the eNB and EPC, we keep a Local PGW node (L-PGW) to interface with the EPC and provide convergence of the Wi-Fi network at this L-PGW. A replica of the PCC architecture can be implemented at the L-PGW level, which will decide whether to deliver the service over the Wi-Fi or the cellular radio. This will certainly need modifications to the RAN and EPC interfaces, like S1AP and NAS, as the L-PGW has to coordinate with the MME.

There is nothing to build from scratch to get going; provisions of this nature are already there in 3GPP, like LIMONET (…). We can leverage this for the convergence of the two networks at the local network level with a common core.

The standards for the LIPA-specific work at 3GPP (TS 23.859) provide data path connectivity compressed to the local network, while the signaling remains intact like a normal PDN connectivity. This could be taken as the principle for the convergence of the two networks at the local network level.

We strongly believe that service providers can build a yielding business case around an associated Wi-Fi network. The convergence at the RAN and at the network layer would be a pure software solution over existing infrastructure, and can be a ready-to-move solution. Other, more specific approaches will require the necessary ecosystem around them for their success.

We are actively seeking support and sponsors to extend our POC work. We need support from service providers and OEMs to widen out our POC development. Please feel free to contact us for details and discussion.

Note: We are strongly pursuing our belief in next-gen networks with a central theme of "homogeneous connectivity through heterogeneous networks"; the solution for Wi-Fi convergence with cellular is aligned with this theme. We believe technology like MEC is going to give a major boost to our central theme.

continuing on it........

Thursday 8 September 2016

5G-Application defined networking- a magical paradigm

How much latency is low enough to support the most stringent real-time applications? What is the least interruption time that still maintains service integrity for the most consistency-demanding applications while an end user transfers from one point of connection to another, or during recovery from transient faults within the network itself?
These could be cited among the requirements for a highly available network. This is what I want to discuss as the magical networks paradigm, i.e. networks that are so highly available that applications are completely transparent to their internal behaviour under environmental changes like mobility, fault and recovery.
We know that even currently available networks provide very high availability, and service consistency is maintained during hand-off, connection failure and recovery, reselection of access points, etc. There are quite a few procedural executions that maintain such availability with fair transparency to the application.
The magical networks paradigm is to make such lengthy procedural executions almost disappear, or to reduce them to computational negligibility.
And this is very much feasible if access latency keeps coming down, together with some architectural evolution. The core of the network, where the context of application requirements for service delivery is retained, is dissolving into a totally flat architecture under the all-IP paradigm. There are no rigid or strict requirements in terms of core network architecture, as NFV and the cloud are blurring the boundaries of the computational requirements of network-specific functionality.
That means networks in which end users can access services with minimal setup time, thanks to less procedural computation and very low latency. Mobility is such that the procedural executions are negligible: rather than a hand-off, it is like a fresh connection from one point of connection to another, so fast and consistent that the magic occurs below the line, completely transparent to the things above the line.
This magical paradigm aligns with application-defined networking, which is about a complete separation of the control and data planes, with the control plane defined largely from the application plane. It also sees the network access and core completely decoupled. This is the arrangement that can realize the magical fidelity for applications.
5G is coming with certain high expectations: very low latency, perhaps of one millisecond only, through advancement in radio access technology, convergence at the access, network function virtualization for the radio access network and for core network functions, network slicing for effective network utilization, and of course software defined networking.
These are some of the advancements that facilitate an architecture so flat that only a 'connect' is required to change the point of access, i.e. no hand-offs or handovers, and a network so highly available, with fault detection and recovery, as to be practically 100 percent resilient.
Probably these will be the next optimization objectives for 5G initiatives.


Edward Lorenz, the world-renowned 20th-century mathematician, produced an intriguing mathematical paper which became the foundation stone of chaos theory, a theory that essentially deals with the gigantic effect of tiny events happening at one place and time, influencing others happening at different places, even far away, and even at different times. Lorenz was working on a mathematical formulation for the prediction of weather conditions when he developed his idea. Lorenz could not find the right title for his paper to express his concept, and gave it a metaphorical name: "a butterfly flaps its wings in Brazil and a tornado is formed in Texas". This concept was disseminated under the notion of the "butterfly effect" and was tough to conceive, in the sense that in the happening of any event everything matters, at various places and various times, in various intensity and impact.
Alluding (with help from the wiki) to this not-easily-conceivable concept: it has not remained free from criticism and sarcastic comments, one of which was that if the theory is accepted, there should be a remarkable change in the course of a weather track from the flap of a seagull's wings. This put the whole thing into a controversial light, but as Wikipedia notes, the most recent proofs fall in favour of the seagull, and chaos theory is without doubt a well-established mathematical discipline.
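The sensitivity that makes the butterfly effect hard to conceive is easy to demonstrate with the classic logistic map, a standard toy system of chaos theory (the choice of map and starting points here is mine, purely for illustration):

```python
# The "butterfly effect" in a few lines: iterate the logistic map
# x -> r*x*(1-x) in its chaotic regime (r = 4) from two starting points
# that differ by one part in a billion, and count the steps until the
# two trajectories have visibly separated.

def diverge_step(x0: float, y0: float, r: float = 4.0, tol: float = 0.1):
    """Return the first iteration at which |x - y| exceeds tol."""
    x, y = x0, y0
    for n in range(1, 200):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if abs(x - y) > tol:
            return n
    return None

# A 1e-9 perturbation blows up to order one within a few dozen steps.
print(diverge_step(0.2, 0.2 + 1e-9))
```

A billionth of a difference in the starting condition, i.e. the flap of the wings, dominates the outcome after a handful of iterations; that is the whole point of the metaphor.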
Coming to the point: the IoT is all about butterflies fluttering everywhere, with the notion of everything connected and yielding a tornado: Big Data.
We are moving towards the development of next-generation communication systems capable of connecting millions of devices per unit of geographic region, with ubiquitous connectivity across the globe. These systems are going to generate data attributed to the various applications running on them. This data is of value, creating tremendous business opportunities for us.
No doubt the upcoming technological advancements in connectivity, networking and computing are capable of catering to all the requirements of such IoT needs, and even where not available today, new advancements will achieve it.
But it is nothing less than true that the success of all IoT stories rests on the anvil of big data, because that is where applications and business cases will be shaped. This is actually where the tornado has to be dealt with; the tornado could take any form, small or big, but most probably the bigger the tornado, the more impact the IoT will bring.
People are talking of a new industrial revolution and coining it with a number sequence, "Industry 4.0". Hopefully chaos theory will help there.

Next Generation Mobile Networks Evolution with Convergence, Cloud and SDN

TAGS: NGN, HetNet, Cloud, SDN, smallcell, WiFi, offload, mobile core, mobile broadband, wireless networks
With the humongous flood of smart devices and next-generation applications, and the reach of networking extending to small appliances, the demand for data traffic on service networks is increasing multi-fold. Networks no longer remain, and will not remain, entities that merely provide connectivity; they have to understand the whole ecosystem and evolve at every level.
These ecosystem requirements are coming with new approaches, generating new challenges and enigmas. To cater to them, the concept of a heterogeneous network at the access and a unified core network could be evaluated, with convergence at the access for heterogeneous radios and a unified core for the unification of policy, provisioning and mobility.
Devices are already there, or coming, with all the new interfaces: 2G/3G or LTE alongside WiFi with Hotspot 2.0 / Passpoint, etc. These technologies are being utilized to provide heterogeneous network connectivity, managed to cater to the various kinds and classes of traffic that the devices generate, depending on the services and connectivity.
This will provide not only capacity enhancement but also efficiency for each access network connection. It will result in total convergence at the radio access, with a selection mechanism to decide the access network for a specific service, and the related traffic forwarding to and from the core network which provides public network connectivity.
The complexities that arise in such network evolution come from the demands for flexibility, elasticity, unification, ubiquity and seamlessness. The feasibility of addressing such requirements lies in a paradigm shift of complete separation of the control and data planes, and in this endeavour cloud and SDN are encouraging enablers: the control plane falls to the cloud and the data plane to SDN.
Coming to more specifics on network architecture, the access stratum and non-access stratum (core) paradigm is going to hold with such enhancements. The access stratum would be more about access selection and capability associations, whereas the core would be more about service delivery, policy, mobility and network connectivity.
On the mobility front a shift will also happen, from hierarchical mobility to network mobility; hierarchical mobility will at best remain with limited reach in the access stratum. That is to say, IETF protocols may come to dominate the whole mobility aspect of these systems.
Looking at the upcoming thrust and compulsion for evolution in network architecture, a solution approach can be contemplated. Briefly, it is convergence at the access stratum, and a non-access stratum with a unified core where an 'Access Controller Agent' helps select the access network and also constructs the core by finding the required association of cloud and SDN. The figure below depicts the emerging mechanism.
The separation of the control plane and data plane in next-generation mobile core network architecture has already happened to some extent in EPS/SAE, thanks to the all-IP paradigm in mobile networks. The coming of cloud and SDN is going to corroborate this paradigm shift to a further extent.
As depicted in the figure, the access approaches the unified core through a globally unique network service access identifier (NSAPI), which helps find the right Access Controller Agent, which in turn finds the association between a cloud for the control plane and an SDN realm for the data plane. The agent is a variant of the MME: as most MME functionality moves to the cloud, the agent remains with global attributes and domain-specific scope to land on the right cloud and enter the right SDN realm or domain. The network of agents provides the unification of policy, provisioning and mobility.
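The Access Controller Agent idea can be rendered as a toy resolver. Every name, identifier format and mapping below is hypothetical, invented only to make the concept concrete; nothing of this kind is standardized:

```python
# A toy rendering of the 'Access Controller Agent' idea sketched above: a
# globally unique access identifier (NSAPI) is resolved to an agent, which
# pairs a control-plane cloud with a data-plane SDN domain.  Every name,
# identifier format and mapping here is hypothetical.

AGENTS = {
    "eu-west": {"control_cloud": "cloud-eu-1", "sdn_domain": "sdn-eu-1"},
    "ap-south": {"control_cloud": "cloud-ap-1", "sdn_domain": "sdn-ap-1"},
}

def resolve_session(nsapi: str) -> dict:
    """Resolve an NSAPI of the (assumed) form '<region>:<session-id>'."""
    region, _, session_id = nsapi.partition(":")
    agent = AGENTS[region]  # the agent for this domain
    return {"session": session_id, **agent}

print(resolve_session("eu-west:0451"))
```

The point of the sketch is the separation: the same resolution step lands the session in a control-plane realm and a data-plane realm independently, which is the unification the text argues for.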
This paradigm shift, which I refer to as "homogeneous connectivity through heterogeneous networks", could be envisioned to transform the next generation network architecture to suit the emerging ecosystem.

Self-Organizing Networks (SON): Managing Disruption -- Towards a bigger scope

Keywords: LTE, small cell, SON, ARPU, 4G, NGN, HETNET
The complexity of operations & management, and the urge for efficiency & optimization, of next generation networks are compelling factors behind the imperative need for SON technologies in next generation wireless systems. SON is imminent in 4G networks, but it is still at a nascent stage of implementation and standardization.
The persistent threat of the data outburst, and the related challenge of capacity enhancement with an eye on ARPU, is like a nightmare today for telecom operators. Efforts are under way to come up with technologies and solutions that combat these demands and have strong business cases and models, along with the radical scalability and flexibility to cope with the upcoming situation and move forward.
Just to gauge the situation, the table below (source: Maravedis Rethink) captures the tussle between capacity requirements and technological solutions.
[Table: % required capacity increase (Y1), % required TCO decrease (Y1), % required capacity increase (Y4) and % required TCO decrease (Y4) for each new technology: LTE upgrade, Wi-Fi offload, public access small cells, re-farmed spectrum and carrier aggregation. Source: Maravedis Rethink]
Small cells, Wi-Fi offload, distributed antenna systems (DAS), centralized RAN (C-RAN) and LTE-Advanced are some of the technological solutions on the list for next generation heterogeneous network (HetNet) evolution at the access, while fiber optics, microwave, millimeter wave, cable and even access radio serve the backhaul network in HetNet approaches.
On the other side of the story, the bigger bite of the value-add is being taken by OTT services, which are becoming arch-rivals of the TSPs and pulling away their business and revenue terrain. On top of that, operators face hard competition over customer retention rather than customer acquisition, due to the emerging demands of customer experience and the related quality of service. These are challenges which directly impact the operational efficiency of the network, in terms of cost and service delivery.
Growth in wireless technology is rapid, specifically in the areas of spectral efficiency and data rate. But the actual realization of network builds that provide a cost-effective solution is contingent on the existing disruption, in terms of the evolving solutions, and also on other factors like existing and required infrastructure, backhaul and fronthaul requirements (technologies like fiber and microwave), and interoperability.
This disruption is taking its own time to settle things into order; what can be foreseen about the upcoming network builds, based on industry research, is listed below.
  • Efficient resource distribution – Towards cloud and virtualization, emerging concepts like C-RAN, where all processing is virtualized in the cloud and stripped-down radio/antenna units remain at the cell sites.
  • Small cells - Towards capacity improvements and densification of macro by combining with more and more antennas in the macro layer (Massive MIMO).
  • Wi-Fi offload – A much sought-after and debated area, with innovative approaches and also with contention and competition between Wi-Fi and LTE evangelists for dominance.
  • Back & Front Haul – Existing capability and upcoming technology & techniques, which provide flexibility and scalability.
  • New Spectrum - sometimes unconventional spectrum bands and carrier aggregation.
  • SDN & NFV – from adaptive networking tools to fully fledged SDN (software defined networking) and NFV (Network Functions Virtualization).
This disruption, in terms of technological advance and innovation, is slowing network builds and also propelling the need for self-organization, which comprises automation on multiple levels: self-configuration, self-healing, self-optimization, etc.
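A minimal flavour of what a self-optimization step looks like: cells report load, and an automated loop biases handovers toward the least-loaded neighbour. The thresholds, the offset step and the data model are invented for illustration, not taken from any SON standard or product:

```python
# Toy self-optimization step in the SON spirit described above: cells report
# load, and an automated loop sheds load by adjusting a handover offset
# toward the least-loaded neighbour.  Thresholds and the offset step are
# invented for illustration.

def rebalance(loads: dict, neighbours: dict, threshold: float = 0.8) -> dict:
    """Return per-cell handover-offset adjustments (dB) to shed load."""
    adjustments = {}
    for cell, load in loads.items():
        if load > threshold:
            # Pick the least-loaded neighbour and bias handovers toward it.
            target = min(neighbours[cell], key=lambda n: loads[n])
            if loads[target] < threshold:
                adjustments[cell] = {"toward": target, "offset_db": 3}
    return adjustments

loads = {"A": 0.95, "B": 0.40, "C": 0.70}
neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(rebalance(loads, neighbours))  # {'A': {'toward': 'B', 'offset_db': 3}}
```

Real SON loops differ mainly in scale and caution (hysteresis, rollback, conflict resolution between competing loops), but the measure-decide-act shape is the same.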

SON is going to be there at every level, i.e. at the system level and at the network level. At the system level it will be an ingredient of the system architecture, with visibility and control over the system; at the network level it will take positions close to analytics, management and control.
SON solutions will take holistic approaches, though at the current stage OEMs deliver SON as point features or piecemeal, and solutions at the network level are not standardized but proprietary.
SON is pervading across the board to address the challenges of the upcoming ecosystems and related network builds. SON is taking up a prominent position, providing tools to manage what would otherwise seem to be chaos: an interrelated multiplicity of demands, capacity, cost, technology, management, operations, efficiency and optimization.