Finding the optimal path for a robot moving from a start position to a goal position through obstacles is still a challenging problem. This paper presents a novel path planning method, named D-point trigonometric, based on the Q-learning algorithm for dynamic and uncertain environments in which all the obstacles and the target are moving. We define new state, action, and reward functions for Q-learning, by which the agent can find the best action in every state to reach the goal along the most appropriate path. Moreover, experiments in Unity3D confirmed the high convergence speed, the high hit rate, and the low dependency on environmental parameters of the proposed method compared with a competing approach.

Path planning has been considered a challenging concern in video games, transportation systems, and mobile robots. The most important path planning issues include the dynamics and uncertainty of the environment, the smoothness and length of the path, obstacle avoidance, and the computational cost. Over the last few decades, researchers have made numerous efforts to present new approaches to solve these problems. Generally, most path planning approaches can be categorized into one of the following methods:

(1) Classical methods
    (a) Computational geometry (CG)
    (b) Probabilistic roadmap (PRM)
    (c) Potential fields method (PFM)
(2) Heuristic and metaheuristic methods
    (a) Soft computing
    (b) Hybrid algorithms

Since the complexity and the execution time of CG methods were high, PRMs were proposed to reduce the search space using techniques such as milestones.
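To make the Q-learning foundation concrete, the following is a minimal tabular Q-learning sketch for grid-based path planning. It implements only the standard Q-learning update rule, not the paper's D-point trigonometric state/action/reward design; the 5x5 grid, the single static obstacle, and the reward values are illustrative assumptions.

```python
import random

# Illustrative assumptions: 5x5 grid, 4 moves, +1 at the goal,
# -1 on the obstacle, small per-step cost to encourage short paths.
GRID = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
GOAL, OBSTACLE = (4, 4), (2, 2)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

# Tabular Q-function: one value per (state, action) pair.
Q = {((x, y), a): 0.0 for x in range(GRID) for y in range(GRID)
     for a in range(len(ACTIONS))}

def step(state, a):
    """Apply action a, clamping to the grid; return (next_state, reward, done)."""
    dx, dy = ACTIONS[a]
    nxt = (min(max(state[0] + dx, 0), GRID - 1),
           min(max(state[1] + dy, 0), GRID - 1))
    if nxt == GOAL:
        return nxt, 1.0, True
    if nxt == OBSTACLE:
        return nxt, -1.0, True
    return nxt, -0.01, False

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            if rng.random() < EPSILON:  # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])
            s2, r, done = step(s, a)
            best_next = max(Q[(s2, i)] for i in range(len(ACTIONS)))
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2
    return Q
```

After training, following the greedy policy (argmax over Q) from the start traces a collision-free path to the goal. The paper's contribution replaces this fixed grid world with moving obstacles and a moving target via redesigned state, action, and reward functions.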