[1] Grefenstette J. J. and Ramsey C. L., "An approach to anytime learning", Proceedings of the Ninth International Workshop on Machine Learning (ML 1992), Aberdeen, Scotland, UK, July 1-3, pp. 189–195, 1992.
[2] Branke J., Kaußler T., Schmidt C., and Schmeck H., "A multi-population approach to dynamic optimization problems", In Adaptive Computing in Design and Manufacturing, pp. 299–308, 2000.
[3] Branke J., "Evolutionary Optimization in Dynamic Environments", Universität Karlsruhe, pp. 1-174, 2000.
[4] Darwin C., “On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life”, London: John Murray (1st edition), 1859.
[5] Jin Y. and Branke J., "Evolutionary optimization in uncertain environments: a survey", IEEE Transactions on Evolutionary Computation, 9(3):303–317, 2005.
[6] Whitley L.D. and Kauth J., “Genitor: A different genetic algorithm”, In Proceedings of the Rocky Mountain Conference on Artificial Intelligence, 118-130, 1988.
[7] Cobb H.G., “An investigation into the use of hyper-mutation as an adaptive operator in genetic algorithms having continuous, time-dependent nonstationary environments”, Technical Report AIC-90-001, Naval Research Laboratory, Washington, DC, USA, 1990.
[8] Grefenstette J.J., “Evolvability in dynamic fitness landscapes: A genetic algorithm approach”, In Proceedings of the 1999 IEEE Congress on Evolutionary Computation (CEC 1999), Vol. 3, IEEE Press, pp. 2031–2038, 1999.
[9] Bak P. and Sneppen K., “Punctuated equilibrium and criticality in a simple model of evolution”, Physical Review Letters, 71(24):4083–4086, 1993.
[10] Stephens C.R., Olmedo I.G., Vargas J.M., and Waelbroeck H., “Self-adaptation in evolving systems”, Artificial Life, 4(2):183–201, 1998.
[11] Goldberg D.E. and Smith R.E., “Nonstationary function optimization using genetic algorithms with dominance and diploidy”, In J.J. Grefenstette (Ed.), Proceedings of the 2nd International Conference on Genetic Algorithms, L. Erlbaum Associates, Hillsdale, NJ, USA, pp. 59–68, 1987.
[12] Etaner-Uyar A.S. and Harmanci A.E., “A new population based adaptive domination change mechanism for diploid genetic algorithms in dynamic environments”, Soft Computing, 9(11):803–815, 2005.
[13] Ryan C., “Diploidy without dominance”, In J.T. Alander (Ed.), Proceedings of the 3rd Nordic Workshop on Genetic Algorithms and Their Applications, 63-70, 1997.
[14] Lewis J., Hart E., and Ritchie G., “A comparison of dominance mechanisms and simple mutation on non-stationary problems”, In A.E. Eiben, T. Bäck, M. Schoenauer, and H.-P. Schwefel (Eds.), Proceedings of the 5th International Conference on Parallel Problem Solving from Nature (PPSN V), Lecture Notes in Computer Science, Vol. 1498, Springer-Verlag, London, UK, pp. 139–148, 1998.
[15] Dasgupta D. and McGregor D.R., “Nonstationary function optimization using the structured genetic algorithm”, In R. Männer and B. Manderick (Eds.), Proceedings of the 2nd International Conference on Parallel Problem Solving from Nature (PPSN II), Elsevier Science, New York, pp. 145–154, 1992.
[16] Klinkmeijer L.Z., De Jong E., and Wiering M., “A serial population genetic algorithm for dynamic optimization problems”, In Y. Saeys, E. Tsiporkova, B. De Baets, and Y. Van de Peer (Eds.), Proceedings of the Annual Machine Learning Conference of Belgium and the Netherlands, pp. 41–48, 2006.
[17] Hollstein R.B., “Artificial genetic adaptation in computer control systems”, Doctoral Thesis. University of Michigan, 1971.
[18] Ramsey C.L. and Grefenstette J.J., “Case-based initialization of genetic algorithms”, In S. Forrest (Ed.), Proceedings of the 5th International Conference on Genetic Algorithms (ICGA 1993), Morgan Kaufmann, San Francisco, pp. 84–91, 1993.
[19] Trojanowski K. and Michalewicz Z., “Evolutionary optimization in non-stationary environments”, Journal of Computer Science and Technology, 1(2):93-124, 2000.
[20] Mohammadpour M. and Parvin H., “Chaotic genetic algorithm based on memory and clustering for solving dynamic optimization problems” (in Persian), Tabriz Journal of Electrical Engineering, Vol. 46, No. 3, Fall 1396 (2017).
[21] Mohammadpour M., Minaei-Bidgoli B., and Parvin H., “A new metaheuristic algorithm based on the behavior of the see-see partridge (Tihu) bird for solving dynamic optimization problems” (in Persian), Journal of Soft Computing, Vol. 8, No. 2, pp. 38–65, 1398 (2019).
[22] Mohammadpour M., Parvin H., and Sina M., “Chaotic genetic algorithm based on explicit memory with a new strategy for updating and retrieval of memory in dynamic environments”, Journal of AI and Data Mining, 6:191–205, 2018.
[23] Mohammadpour M. and Parvin H., “Genetic algorithm based on explicit memory for solving dynamic problems”, Journal of Advances in Computer Research, Sari Branch, Islamic Azad University, 7(2):53–68, 2016.
[24] Parvin H., Nejatian S., and Mohammadpour M., “Explicit memory based ABC with a clustering strategy for updating and retrieval of memory in dynamic environments”, Applied Intelligence, Springer, 2018. doi: 10.1007/s10489-018-1197-z.
[25] Ursem R.K., “Multinational GA optimization techniques in dynamic environments”, In Proceedings of the 2000 Genetic and Evolutionary Computation Conference (GECCO 2000), Morgan Kaufmann, San Francisco, pp. 19–26, 2000.
[26] Park T. and Ryu K.R., “A dual population genetic algorithm with evolving diversity”, In Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), IEEE Press, pp. 3516–3522, 2007.
[27] Park T., Choe R., and Ryu K.R., “Adjusting population distance for the dual-population genetic algorithm”, In Proceedings of the Australian Conference on Artificial Intelligence (AI 2007), Lecture Notes in Computer Science, Vol. 4830, Springer-Verlag, Berlin, pp. 171–180, 2007.
[28] Park T., Choe R., and Ryu K.R., “Dual population genetic algorithm for nonstationary optimization”, In Proceedings of the 2008 Genetic and Evolutionary Computation Conference (GECCO 2008), ACM, New York, NY, pp. 1025–1032, 2008.
[29] Yang S., “Constructing test environments for genetic algorithms based on problem difficulty”, In Proceedings of the 2004 IEEE Congress on Evolutionary Computation, Vol. 2, IEEE Press, pp. 1262–1269, 2004.
[30] Martello S. and Toth P., “Knapsack problems: Algorithms and computer implementations”, John Wiley & Sons, New York, 1990.
[31] Salimi Sartakhti J. and Goli Bidgoli S., “A hybrid algorithm using the firefly algorithm, genetic algorithm, and local search” (in Persian), Journal of Soft Computing, Vol. 8, No. 1, pp. 14–28, 1398 (2019).
[32] Agarwal R., Schuurmans D., and Norouzi M., "An optimistic perspective on offline reinforcement learning", In Proceedings of the International Conference on Machine Learning (ICML 2020), PMLR 119, pp. 104–114, 2020.
[33] Cabi S., Colmenarejo S.G., Novikov A., Konyushkova K., Reed S., Jeong R., Zołna K., Aytar Y., Budden D., Vecerik M., Sushkov O., Barker D., Scholz J., Denil M., de Freitas N., and Wang Z., “Scaling data-driven robotics with reward sketching and batch reinforcement learning”, arXiv preprint arXiv:1909.12200, 2019.
[34] Castro P.S., Moitra S., Gelada C., Kumar S., and Bellemare M.G., “Dopamine: A research framework for deep reinforcement learning”, 2018. URL: http://arxiv.org/abs/1812.06110.
[35] Bäck T., “Evolutionary algorithms in theory and practice”, Oxford University Press, New York, 1996.
[36] Goldberg D.E., “The Design of Innovation: Lessons from and for Competent Genetic Algorithms”, Kluwer Academic Publishers, Norwell, MA, 2002.
[37] Baluja S., “Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning”, Technical Report CMU-CS-94-163, Carnegie Mellon University, USA, 1994.
[38] Harik G.R., Lobo F., and Sastry K., “Linkage learning via probabilistic modeling in the ECGA”, In M. Pelikan, K. Sastry, and E. Cantú-Paz (Eds.), Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications, Springer-Verlag New York Inc., Secaucus, NJ, USA, pp. 39–61, 2006.
[39] Bak P., Tang C., and Wiesenfeld K., “Self-organized criticality: An explanation of 1/f noise”, Physical Review Letters, 59(4):381–384, 1987.
[40] Bak P. and Sneppen K., “Punctuated equilibrium and criticality in a simple model of evolution”, Physical Review Letters, 71(24):4083–4086, 1993.
[41] Fernandes C.M., “Pherographia: Drawing by ants”, Leonardo: Journal of the International Society for the Arts, Sciences and Technology, MIT Press, 2009.
[42] Fernandes C.M., Merelo J.J., Ramos V., and Rosa A.C., “A self-organized criticality mutation operator for dynamic optimization problems”, In Proceedings of the 2008 Genetic and Evolutionary Computation Conference (GECCO 2008), ACM, New York, pp. 937–944, 2008.
[43] Fernandes C.M., Laredo J.L.J., Rosa A.C., and Merelo J.J., “The sandpile mutation Genetic Algorithm: an investigation on the working mechanisms of a diversity-oriented and self-organized mutation operator for non-stationary functions”, Applied Intelligence 39: 279–306, 2013.
[44] Tinós R. and Yang S., “A self-organizing RIGA for dynamic optimization problems”, Genetic Programming and Evolvable Machines, 8:255–286, 2007.
[45] Yang S., “Genetic algorithms with memory- and elitism-based immigrants in dynamic environments”, Evolutionary Computation, 16(3):385–416, 2008.
[46] Fernandes C.M., Merelo J.J., and Rosa A.C., “A comparative study on the performance of dissortative mating and immigrants’ strategies for evolutionary dynamic optimization”, Information Sciences, 181(20):4428–4459, 2011.
[47] Martello S. and Toth P., “Knapsack problems: algorithms and computer implementations”, John Wiley & Sons, New York, 1990.
[48] Jana B., Mitra S., and Acharyya S., “Repository and mutation based particle swarm optimization (RMPSO): a new PSO variant applied to reconstruction of gene regulatory network”, Applied Soft Computing, 74:330–355, 2019.
[49] Pampara G. and Engelbrecht A.P., “Self-adaptive quantum particle swarm optimization for dynamic environments,” International Series in Operations Research and Management Science, 272:163–175, 2018.
[50] Dzalbs I. and Kalganova T., "Accelerating supply chains with Ant Colony Optimization across range of hardware solutions," arXiv:2001.08102, 2020.
[51] Mohammadi H. and Akhavan A., “Performance comparison of the HSA, ICA, and PSO algorithms for selective harmonic elimination in a cascaded multilevel inverter with variable DC sources” (in Persian), Journal of Soft Computing, Vol. 3, No. 2, pp. 20–30, 1393 (2014).
[52] Parvin H., Minaei B., and Ghatei S., “A new particle swarm optimization for dynamic environments”, In Computational Intelligence in Security for Information Systems, Lecture Notes in Computer Science, Vol. 6694, Springer, pp. 293–300, 2014.
[53] Hashemi A. and Meybodi M., "Cellular PSO: A PSO for dynamic environments," Advances in Computation and Intelligence, pp. 422–433, 2009.
[54] Assimi H., Harper O., Xie Y., Neumann A., and Neumann F., “Evolutionary bi-objective optimization for the dynamic chance-constrained knapsack problem based on tail bound objectives”, In Proceedings of the 24th European Conference on Artificial Intelligence (ECAI 2020), Santiago de Compostela, Spain, 2020.
[55] Roostapour V., Neumann A., and Neumann F., “On the performance of baseline evolutionary algorithms on the dynamic knapsack problem”, In Parallel Problem Solving from Nature (PPSN XV 2018), Lecture Notes in Computer Science, Springer, pp. 158–169, 2018.
[56] Xie Y., Harper O., Assimi H., Neumann A., and Neumann F., “Evolutionary algorithms for the chance-constrained knapsack problem”, In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2019), pp. 338–346, 2019.
[57] Zhao Z., Gu F., and Cheung Y.M., “Co-operative prediction strategy for solving dynamic multi-objective optimization problems”, In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC 2020), 2020. doi: 10.1109/CEC48606.2020.9185721.
[58] Hu W., Jiang M., Gao X., Tan K.-C., and Cheung Y.-M., "Solving dynamic multi-objective optimization problems using incremental support vector machine", 2019 IEEE Congress on Evolutionary Computation (CEC), pp. 2794-2799, 2019.