International Journal of Advance Computational Engineering and Networking (IJACEN)
  Journal Paper

Paper Title : Opportunistic Routing In Cognitive Radio Networks Using Reinforcement Learning

Author : Jitisha R. Patel, Sunita S. Barve

Article Citation : Jitisha R. Patel, Sunita S. Barve, (2014) "Opportunistic Routing In Cognitive Radio Networks Using Reinforcement Learning", International Journal of Advance Computational Engineering and Networking (IJACEN), pp. 1-3, Volume-2, Issue-8

Abstract : Cognitive radio (CR) technology is developing rapidly owing to its capability for adaptive learning and reconfiguration. Cognitive Radio Networks (CRNs) can therefore increase spectrum efficiency by allowing secondary users (SUs) to access the licensed band dynamically and opportunistically without interfering with the primary users (PUs). Daniel H. and Ryan W. Thomas define a CRN, in the context of machine learning, as a network that improves its performance through experience gained over a period of time, without complete information about the environment in which it operates. This dynamism and opportunism can be learnt through reinforcement learning (RL), which is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. The paper proposes a routing scheme that uses Q-learning, the most widely used RL approach in wireless networks. In Q-learning, the learnt action value, or Q-value, Q(state, event, action), is updated using the reward and recorded. For each state-event pair, an appropriate action is rewarded and its Q-value increased; the Q-value therefore indicates the appropriateness of selecting an action in a given state-event pair. At any time instant, the agent chooses the action that maximizes the Q-value. The reward corresponds to a performance metric such as throughput.
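The Q-learning scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state and event names, the two candidate next hops, the reward value, and the learning-rate and discount parameters are all illustrative assumptions.

```python
ALPHA = 0.5   # learning rate (assumed value, for illustration)
GAMMA = 0.9   # discount factor (assumed value, for illustration)

class QRouter:
    """Keeps a table of learnt Q-values indexed by (state, event, action)."""

    def __init__(self, actions):
        self.actions = actions
        self.q = {}  # (state, event, action) -> Q-value; unseen entries default to 0.0

    def q_value(self, state, event, action):
        return self.q.get((state, event, action), 0.0)

    def choose_action(self, state, event):
        # Greedy selection: pick the action with the highest Q-value
        # for this state-event pair.
        return max(self.actions, key=lambda a: self.q_value(state, event, a))

    def update(self, state, event, action, reward, next_state, next_event):
        # Standard Q-learning update: move the recorded Q-value toward the
        # reward plus the discounted best Q-value of the successor pair.
        best_next = max(self.q_value(next_state, next_event, a)
                        for a in self.actions)
        old = self.q_value(state, event, action)
        self.q[(state, event, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Example: two hypothetical next-hop choices; the reward stands in for throughput.
router = QRouter(actions=["hop_A", "hop_B"])
router.update("idle", "pu_absent", "hop_A", reward=1.0,
              next_state="idle", next_event="pu_absent")
print(router.choose_action("idle", "pu_absent"))  # hop_A now has the higher Q-value
```

The rewarded action's Q-value rises above the default, so the greedy rule subsequently prefers it for that state-event pair, which mirrors the selection behaviour the abstract describes.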

Type : Research paper

Published : Volume-2,Issue-8


Copyright: © Institute of Research and Journals

Published on : 2014-08-01