Everything posted by CAE82

  1. The CA formula is not really known @wazzaflow10 (unless you have inside information from SI). I’ve worked it out for a player with a single position and ‘average’ attributes to a pretty good approximation, but it was too complex to work out for multiple positions and it had some flaws for high attribute values (for example, players with say 19 for acceleration and pace), which hints at some non-linearity. Indeed, the ‘boring old’ linear kernel didn’t work well (neither did the polynomial one) and I got the best results using the RBF kernel. Now with the model you just supply the attributes that contribute to CA, the position ratings and the weaker foot (47 inputs in total) and it does a pretty good job of predicting CA without any a priori knowledge of the attribute weights. I’ve spoken with colleagues who are experts in deep learning and I know about physics-informed models (i.e. providing the model with a priori knowledge of the attribute weights) but do not have enough knowledge (or motivation) to try to implement it. I’m not sure what you mean by multicollinearity, but if you are suggesting that the inputs are not independent when it comes to CA then I’ve never read or seen anything to suggest that is the case. I’d be interested to hear more. Of course, I’d also be interested to see the outcome of your ‘real experiment’ if you ever find the time or motivation.
  2. Maybe they’ll look more closely at that kind of stuff for FM25. Another clause that needs tightening up is the release fee for Champions League clubs. I spent a while trying to figure out why I couldn’t activate one and finally realised that it only applies when the parent club is not in the Champions League itself. I didn’t find that particularly obvious.
  3. Thanks. Have seen that before. Lots of interesting stuff posted by Seb Wassell, pity he doesn’t seem to post much any more. Maybe he left SI. I used to spend a long time on the ‘Developers Posts’ tab going through and reading all their posts to try and glean any insights into the game. But now it seems to be mostly off-topic stuff.
  4. I’m not sure you’re correct there @herne79. I’ve never seen an ‘uplift’ in CA after a player gains new positions (naturally or ‘forced’ through the editor). It usually seems that the (R)CA stays the same and the attributes redistribute accordingly. But of course I respect and appreciate your insight and will bear it in mind. After more than a quarter of a century playing (CM) FM it’s regrettable that the real interest and enjoyment comes from these kinds of toy experiments, and I hope that FM24/25 brings in some genuinely new features that actually make the game interesting again.
  5. It’s not a simple highest weight, even for a DC/MC/SC. For two positions, the CA seems to be the same as the highest CA for each individual position. So if you had a player who was a 120 CA DC and a 130 CA MC, he would have a 130 CA as a D/MC. It seems that when you add a 3rd position the CA changes significantly, but it doesn’t appear to come simply from the highest of the three weights. Anyhow, the exact formula doesn’t really matter: we all know that playing in multiple positions leads to a larger CA for a given set of attributes, thus leaving less growth until the PA is reached.
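The two-position rule described above can be sketched as a toy function. This is only an illustration of the observed behaviour, not confirmed game logic, and as noted it breaks down once a third position is added:

```python
# Toy sketch of the observed rule: for two positions, the combined CA
# appears to equal the highest of the per-position CAs. With three or
# more positions the game deviates from this, so treat the result as a
# lower bound only.
def combined_ca(position_cas):
    """position_cas: dict mapping position name -> CA for that position alone."""
    return max(position_cas.values())

# The D/MC example from the post: a 120 CA DC who is also a 130 CA MC.
print(combined_ca({"DC": 120, "MC": 130}))  # -> 130
```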
  6. An optional fee means that the buying club is not obliged to buy the player at the end of the loan, whereas a mandatory fee means they are. In either case, the selling club must accept. I guess the specified fee must be a straight cash bid, and then it would be accepted. Possibly the ‘higher’ bid was a lower up-front amount with add-ons? In any case, it seems strange and is likely a bug. I saw something equally strange recently: I matched a release fee but with the transfer date at the end of the season and it was rejected. It was only accepted with an immediate date.
  7. A simple tool to test is here: https://replit.com/@fmcae/FMCAP?v=1
  8. Thanks for your comment @enigmatic. I think the attributes are actually stored as 1-100 internally, based on using CheatEngine to inspect FM's RAM and having a poke around. Could be wrong though. Unfortunately I do not think the weights work like that (highest weight): based on my testing, I could not get decent results trying lots of different weight combinations. I do think the positions are grouped, so a DL becoming a D/WBL does not make a big difference due to the similarity of the positions. Similarly, if a DL becomes a DRL, playing on both sides doesn't seem to cause an obvious increase in CA either. In the end I gave up and hence used ML. I actually only included the features that are known to contribute to CA, so the ML model does not include things like Determination, Flair and so on. I also think there is some non-linearity in the model for high-valued attributes, i.e. the weights are not constant across the whole 1-20 range. It seems that going from 12 to 16 does not cause as big a CA increase as going from 16 to 20, for example. Based on my earlier experiments I know that 6 in all attributes corresponds to a CA of 1 and 16s in all attributes corresponds to a CA of 200, so I presume that the range 16-20 is treated somewhat differently.
  9. Background

Background: I decided to see if I could use machine learning (ML) to accurately predict the current ability (CA) of players based upon their attribute values and position ratings. I'd previously attempted to calculate CA using the attribute weights found in the pre-game editor. This was relatively successful but only worked well for players with a single, natural position. When a player was able to play multiple positions, I wasn't able to analytically figure out how the different position weights combined to create the overall CA. However, this type of task is exactly what machine learning algorithms excel at.

Machine Learning: The task is to supply a dataset of input values (or 'features' in ML parlance) along with a corresponding output value (or 'target'). The ML algorithm then learns how to map the features to the target. Once the model is trained, you can provide it with a set of features and it will predict the target value. In terms of the Football Manager current ability task, we supply a sample of players (their attribute values and position ratings) along with their current ability. The model is trained on this sample and learns how the attribute values and positions map to current ability. Once trained, we can feed in the attribute and position values of any player and the model will predict their current ability. This particular task is a regression problem - if you are interested in more of the background and implementation details, search for 'support vector regression' (SVR). This is a type of 'supervised machine learning', which simply means that we provide the model with training data from which it learns, rather than the algorithm 'teaching itself'. I wrote the code in Python using the 'scikit-learn' machine learning library.

Training Data: By using a modified version of a 3rd party scouting tool, I was able to export players along with their attribute values, position ratings and current ability from a save game. This amounted to around 28,000 players. In the first instance I have focussed on outfield players, so after filtering out the goalkeepers I was left with around 25,000 players. This sample is then randomly split into two groups: 75% of the players act as the training data (the players from which the SVR algorithm learns) and the remaining 25% act as unseen test data - these are the players used to test the accuracy of the model. A histogram of the CA distribution of the players is shown below. Note the very few players with high values of CA. This has implications later for predicting the CA of top players; they are essentially outliers, so there is not much data for the model to be trained on.

Model Accuracy: Surprisingly, the model only took around 5 minutes to train on my fairly standard laptop. After tuning the model parameters, I was able to obtain a model accuracy of 98%. A plot showing the target CA against the predicted CA is shown below. Each blue circle represents a player and the red line represents the situation in which the predicted CA is exactly equal to the target CA. In an ideal world, all the blue circles would lie on that red line. Note the small group of players at 175+. Despite the fact there are only a few of them, the model still accurately predicts their CA.

Examples: I used the model to predict the CA of a few specific players. The first player I tested was Ridle Baku. He can play multiple positions, which has a strong impact on his CA. His recommended CA in the test save is 152. The ML model predicted a CA of 151. Very promising! The next player to test was Kevin De Bruyne. He too can play multiple positions, has a strong weaker foot and is one of the 'outliers' at the top end of the current ability range. His recommended CA in the test save is 186. The ML model predicted a CA of 180. Not bad for such an outlier! I hope you find this article interesting and maybe it will help you to better understand how machine learning can be used. CAE
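The training pipeline described in the post above can be sketched with scikit-learn. Since the exported save-game data isn't available here, the sketch generates synthetic players with a toy linear "CA" formula; the feature count (47), the RBF kernel and the 75/25 split follow the post, while everything else (weights, noise, the `C` value) is an assumption for illustration:

```python
# Minimal SVR sketch of the pipeline described in the post, using
# synthetic data in place of the real save-game export. The "CA"
# formula below is a made-up linear toy, not the real game logic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_players, n_features = 2000, 47           # 47 inputs, as in the post
X = rng.uniform(1, 20, size=(n_players, n_features))

# Hypothetical attribute weights -> toy CA in roughly the 1-200 range.
weights = rng.uniform(0, 10, size=n_features)
ca = X @ weights / weights.sum() * 20 - 120
ca = np.clip(ca + rng.normal(0, 2, n_players), 1, 200)   # small noise

# 75% training data, 25% unseen test data, as in the post.
X_train, X_test, y_train, y_test = train_test_split(
    X, ca, test_size=0.25, random_state=42)

# The post found the RBF kernel worked best; feature scaling matters for SVR.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100))
model.fit(X_train, y_train)
print(f"R^2 on unseen players: {model.score(X_test, y_test):.3f}")
```

On real data the accuracy will of course depend on the exported sample and on tuning `C`, `epsilon` and `gamma`, which is presumably the "playing with the model parameters" step mentioned in the post.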
  10. You need to multiply each attribute value by the corresponding weight in the table shown in the FM Scout article about CA and then sum them up. Then divide by the sum of all the weights for that position. That gives you a weighted 'average attribute value' - let's call this 'x'. There is a linear relationship between this value and the CA shown in the game:

CA = 20*x - 120

Note that this assumes just a single position. I wasn't able to figure out how the CA depends on the familiarity of the different positions, and it is not obvious how the attribute weights are affected.
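As a concrete sketch of the calculation above - the attribute names and weights below are made-up placeholders, and the real per-position weights come from the editor table:

```python
# Hypothetical example of the single-position CA calculation described above.
def current_ability(attributes, weights):
    """attributes/weights: dicts keyed by attribute name (1-20 scale)."""
    total_weight = sum(weights.values())
    x = sum(attributes[name] * weights[name] for name in weights) / total_weight
    return 20 * x - 120  # the linear mapping quoted above

# Placeholder attributes and weights - not the real editor values.
attrs = {"passing": 16, "tackling": 16, "vision": 16}
wts = {"passing": 10, "tackling": 8, "vision": 6}
print(current_ability(attrs, wts))  # 16 in every attribute -> 200.0
```

Note that 16 in every attribute mapping to a CA of 200 agrees with the anchor point mentioned in an earlier post.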
  11. Short link to tactic spreadsheet: https://bit.ly/knap2023

How does scoring work? The overall score is determined by the following:
  • 64% points earned (relative to maximum points obtained by a tactic in the list)
  • 30% goal difference (relative to maximum goal difference obtained by a tactic in the list)
  • 3% for winning the UCL
  • 2% for winning the FA Cup
  • 1% for winning the League Cup
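The weighting above can be turned into a small calculator. The function name and the example inputs are illustrative, not taken from the spreadsheet:

```python
# Illustrative calculator for the scoring scheme listed above.
def tactic_score(points, max_points, goal_diff, max_goal_diff,
                 won_ucl=False, won_fa_cup=False, won_league_cup=False):
    score = 0.64 * points / max_points           # 64% points component
    score += 0.30 * goal_diff / max_goal_diff    # 30% goal-difference component
    score += 0.03 * won_ucl + 0.02 * won_fa_cup + 0.01 * won_league_cup
    return 100 * score  # percentage of the maximum possible score

# Hypothetical season: 90 points vs a best of 95, +60 GD vs a best of +70,
# winning the UCL and the League Cup.
print(round(tactic_score(90, 95, 60, 70, won_ucl=True, won_league_cup=True), 1))  # -> 90.3
```

A tactic that tops both the points and goal-difference tables and wins all three cups scores the full 100.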