A non-monotone Hestenes-Stiefel conjugate gradient algorithm for nonsmooth convex optimization
International Journal of Nonlinear Analysis and Applications
Article 2, Volume 15, Issue 3, June 2024, Pages 11-20. Full text (491.85 K)
Article Type: Research Paper
DOI: 10.22075/ijnaa.2019.16973.1899
Authors
Ahmad Abouyee Mehrizi; Reza Ghanbari*
Faculty of Mathematical Sciences, Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran
Received: 22 Dey 1397 (12 January 2019); Revised: 6 Tir 1398 (27 June 2019); Accepted: 9 Mordad 1398 (31 July 2019)
Abstract
Here, we propose a practical method for solving nonsmooth convex problems by means of conjugate gradient-type methods. The conjugate gradient method is one of the most effective methods for smooth, large-scale optimization problems; motivated by this, we present a modified Hestenes-Stiefel (HS) conjugate gradient method. Given a nonsmooth convex problem, we first convert the nonsmooth objective function into a smooth one via the Moreau-Yosida regularization and then apply our method, equipped with a nonmonotone line search, to the regularized problem. We prove that the algorithm converges to an optimal solution under standard conditions. Our algorithm inherits the performance of the HS conjugate gradient method.
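For orientation only, and not quoted from the paper itself (whose modified HS parameter and specific nonmonotone rule may differ from these textbook forms), the three standard ingredients named in the abstract can be sketched as follows. The Moreau-Yosida regularization of a convex function $f$ with parameter $\lambda > 0$ is
\[
F_\lambda(x) = \min_{z \in \mathbb{R}^n}\Big\{ f(z) + \tfrac{1}{2\lambda}\|z - x\|^2 \Big\},
\qquad
\nabla F_\lambda(x) = \frac{x - p_\lambda(x)}{\lambda},
\]
where $p_\lambda(x)$ denotes the unique minimizer on the right; $F_\lambda$ is convex and continuously differentiable, and its minimizers coincide with those of $f$. A Hestenes-Stiefel conjugate gradient iteration applied to $F_\lambda$ then reads
\[
x_{k+1} = x_k + \alpha_k d_k,
\qquad
d_{k+1} = -g_{k+1} + \beta_k^{HS} d_k,
\qquad
\beta_k^{HS} = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k},
\]
with $g_k = \nabla F_\lambda(x_k)$ and $y_k = g_{k+1} - g_k$, while a nonmonotone line search of Grippo-Lampariello-Lucidi type accepts the step length $\alpha_k$ whenever
\[
F_\lambda(x_k + \alpha_k d_k) \le \max_{0 \le j \le m(k)} F_\lambda(x_{k-j}) + \delta\, \alpha_k\, g_k^{T} d_k,
\qquad \delta \in (0, 1),
\]
where $m(k)$ bounds the number of previous function values taken into account.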
Keywords
Nonsmooth convex optimization; Conjugate gradient method; Nonmonotone line search; Global convergence
Statistics: article views 15,037; full-text downloads 211