Sports Interactive Community
91427

Decisions is a broken attribute and appears to have a negative effect on team performance

Neil Brock

Just to clarify: the opening post and point of this thread has been debunked, as shown here, following feedback from SI's Ed

We appreciate everyone who contributed and anyone who conducts certain tests with the game. Just remember it's always worth asking us if you can, as we can give advice on how to conduct accurate experiments :thup: 

 

Message added by Neil Brock


I really hope the fella who did that experiment has reported it to SI for investigation.

5 minutes ago, Spurs08 said:

You know what? **** that. SI can read too. They've even got the file there, available for download, if they want to test it themselves, and an email address to get in touch with the people responsible for the experiment. It shouldn't be on paying customers to do their work for them, and this is a major issue that has zero excuse to even have made it into the final version of the game. It actually explains a lot of the seeming issues we've seen - top teams struggling for pass completion, not being dominant enough, and constantly squandering chances where there's a clear passing option to give someone a very good shot.

Wow, angry much? Such rage for that comment? You need to talk to someone? Cry a bit on a shoulder? Take a walk and calm down, then we can have a decent discussion in here. This does not "explain" anything. It might be a cause for issues, but I haven't had problems with pass completion, nor dominating games. Following your logic, it's no issue at all since I don't have it. Different methods will cause different results. You are one of those people who declare guilty without reviewing any evidence, aren't you? A claim of something wrong and out comes the pitchforks and torches.

Back on topic. What if this doesn't replicate for SI? What if this was a one-off? I haven't tried, so I wouldn't know. SI runs a ton of soak tests to catch things like this, so if the author of this experiment has managed to set up a game that catches something they haven't, I'm sure they would like to know the details. This might be a huge issue and something that should have been caught if it happens regularly. This is why I hope the game in question is reported for further investigation.

15 minutes ago, Spurs08 said:

You know what? **** that. SI can read too. They've even got the file there, available for download, if they want to test it themselves, and an email address to get in touch with the people responsible for the experiment. It shouldn't be on paying customers to do their work for them, and this is a major issue that has zero excuse to even have made it into the final version of the game. It actually explains a lot of the seeming issues we've seen - top teams struggling for pass completion, not being dominant enough, and constantly squandering chances where there's a clear passing option to give someone a very good shot.

They can read and they will probably investigate it as soon as they're aware of it.
That's what the bugs forums are for: to immediately make them aware of issues and potential issues.
It's not doing the work for them. It's helping them out, for those who choose to do so. Do we have to? Of course not. But reporting it there could be beneficial for everyone involved, as it'll get looked at faster compared to a random thread in GD.

The stats and evidence provided certainly indicate that it is a major issue.
Zero excuses though? How many were aware of it being a potential issue before this was presented - as a factual potential issue?
Maybe they are aware of something but the issue might be a bit more complicated than just that "tiny" problem.
Maybe they don't have a clue about this being a potential issue at all.

32 minutes ago, roykela said:

Maybe they are aware of something but the issue might be a bit more complicated than just that "tiny" problem.

I'd expect that it definitely is much, much more complicated than a "tiny problem", with implications and knock on effects all across the ME. Decisions is (or at least should be, according to my understanding) the most all-encompassing attribute of them all, one that nearly every action on the pitch is more or less tied to. The first table on that page is alarming, the second is shocking. Something seems to be very wrong here.

3 minutes ago, kozmik said:

I'd expect that it definitely is much, much more complicated than a "tiny problem", with implications and knock on effects all across the ME. Decisions is (or at least should be, according to my understanding) the most all-encompassing attribute of them all, one that nearly every action on the pitch is more or less tied to. The first table on that page is alarming, the second is shocking. Something seems to be very wrong here.

Completely agree.

19 minutes ago, wicksyFM said:

I'm not sure what to think of this. I build my teams around the Decisions attribute and I have been doing fine.

Anecdotal evidence is more or less useless when we have an actual detailed examination of the effects the attribute has.

11 minutes ago, herne79 said:

The worst thing we can do is have knee jerk reactions.

The best thing we can do is get SI to take a look at the data.

I've seen examples before of data that supposedly shows how things are broken and then when it's actually analysed it turns out things have been missed or other things go on under the hood that the testers weren't aware of.  I'm not saying that's the case here - there may indeed be something up with Decisions - but let SI take a look first before we decide something is broken.

Does it look concerning?  Yes.  Is it actually broken?  No idea, let SI investigate.

@Nic Madden @Neil Brock is this something SI can look into as is or does a Bug Report need to be raised with the data files attached?

For information; there is a thread in the bugs forum about this issue opened by a different user:
 

 


I had unimpressive but not that unimpressive results with a similar FM17 experiment targeting the Decisions attribute (the teams with high-Decisions midfields on average did better, the teams with middling Decisions on average did slightly worse, and the teams with low-Decisions midfields on average did worse, but not in rank order, and some of the low-Decisions players had individually outstanding seasons).

To reduce the effect of luck I reset morale after every game, used the Swiss League as a base so sides played each other four times, and had technically sound players with high Concentration and good personalities, mediocre strikers and decent defenders.

My working hypothesis is that (i) the teams with high Decisions have a much higher CA to allow for that, so the AI judges them to be favourites; (ii) this has an effect on AI formations and instructions even when managers are set as identical (mine were), and on morale effects from pre-match expectations; (iii) this effect is more significant than anything Decisions does in game.

This would also explain why non-CA-weighted attributes like Aggression are so highly ranked, and why highly weighted but not-useful-in-isolation ones like Agility fared as they did in the other test.

I still have no idea what type of decisions Decisions is supposed to boost and whether that's actually always a good thing (maybe not if it discourages risk taking in a player who's technically far too good for the league, for example). It's possible the attribute works perfectly and just produces occasional errors with low values, or only really takes effect if high tempos force fast decision making, but it seems massively over-weighted CA-wise, especially for midfielders. To put things into perspective, my high-Decisions players were good Premier League standard by the CA algorithm and my low-Decisions players were League One standard, but they both performed at fairly similar levels.
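To sketch what I mean by over-weighting, here's a toy CA-style calculation. The weights below are completely made up for illustration; FM's real per-position weights are internal and not public:

```python
# Toy illustration of attribute weighting in a CA-style cost.
# The weights here are invented for illustration only; FM's actual
# per-position weights are internal and unpublished.
WEIGHTS = {
    "passing": 1.0,
    "technique": 1.0,
    "first_touch": 1.0,
    "decisions": 3.0,  # assumption: Decisions weighted heavily for midfielders
}

def toy_ca(attrs):
    """Weighted sum of attributes, standing in for a CA-style cost."""
    return sum(WEIGHTS[name] * value for name, value in attrs.items())

base = {"passing": 12, "technique": 12, "first_touch": 12}
high_dec = toy_ca({**base, "decisions": 18})
low_dec = toy_ca({**base, "decisions": 4})

# Under these weights a 14-point Decisions gap costs 42 "CA points",
# which the high-Decisions player must give back in other attributes
# if both players are meant to sit at the same CA.
print(high_dec - low_dec)  # -> 42.0
```

If the attribute's in-match effect is small, a weight like that buys almost nothing, which would fit Premier League-rated and League One-rated players performing at similar levels.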

Edited by enigmatic

14 minutes ago, roykela said:

For information; there is a thread in the bugs forum about this issue opened by a different user:
 

 

Great :thup:.  Hopefully this is something SI can now look into and either ease people's concerns or use to make improvements.


The experiment may be completely flawed in ways we have overlooked, but for the past two FM versions I have consistently questioned the decision making of elite forward players. This has especially been the case when it comes to taking long shots rather than playing in unmarked teammates.

Looking forward to the response. 

1 hour ago, rdbayly said:
4 hours ago, 91427 said:

What worries me is the attribute could easily have been broken for years and it's only because of testing by players that SI will learn about it. SI really need to step up their game with the ME as a whole tbh. You shouldn't be able to just play with 3 strikers and massively over perform, you shouldn't be able to get very average strikers to score a goal a game by putting them in a good system, you shouldn't have players like Messi dribble straight at and into players who were 10 yards away when they picked up the ball, you shouldn't have a 3D setting that's visually outclassed by LMA Manager 2007 and you absolutely shouldn't have an attribute that works the opposite way it's intended

 

Yes, the three-striker overperformance is crazy. I used a 4-3-3 before I heard of the issue and my team was unbeatable, thrashing everyone on the way to winning every trophy I entered, almost irrespective of which team I put out.

Plus, the Manchester City team I was competing against only bought a 37 year old centre back on a free in the summer transfer window, so even when I changed formations in the second season there was no competition, so I've lost interest.

It seems really easy to manage Man Utd this time round! Not sure if it’s a sign that mourinho is doing well or if the squad suits the ME.

9 minutes ago, BigRoboCrouch said:

"SI needs to take a look and investigate"

Are you really that naive that you think SI doesn't know its own game? The game they test internally all year long?

Codemasters released F1 2017 with a bug that means your career teammate doesn't upgrade at the same rate when you upgrade, along with multiple game-breaking multiplayer bugs.

 

They said 'we can't do anything, we'll fix it for next year'

Do not over-credit game developers; they seem to get worse and worse every year. EA are gouging everyone on microtransactions, Star Citizen is vapourware, DayZ is still in alpha FOUR YEARS after being in early access on Steam.

 

The gaming industry is fundamentally broken: developers cash in on early releases because gamers like us are too impatient to wait for a finished game, and they're too greedy to finish things first.

I'm not saying SI are bad, there are good devs out there, but question things more.

Edited by Oenone87


Furthermore, SI are the kind of company that would (hot)fix such an issue if it was seen to be true (see the recent scouting costs issue), so let's let them investigate it and see, shall we?


Maybe I didn't express myself clearly enough in the other post, and I apologise for that.

What I meant is that if an issue as big as an attribute being broken is real, it's naive to presume they aren't already aware of it, and call for SI's reply.

After all it's their own game.


They are running soak tests to test the game intensively on various aspects of gameplay. But this game has become so complex that there is always a chance of missing something. If there really is something, they will fix it as soon as they can.


I guess this hugely bolsters the idea that positional/role familiarity is a worthless tactical guideline compared to strength in the appropriate attributes, as whatever hit a player takes to his Decisions attribute might even help him somehow...


While it is possible the Decisions attribute is broken when simulating match results, it seems to work fine in the match engine. I checked my season stats vs. the Decisions attribute. Guess what? The Decisions stat lines up perfectly with the number of key passes across the season for each player. (The Technique and First Touch attributes seem to help too.)

My take is the test is broken in two ways.

1. It assumes better Decisions is better for every player. This is probably not true. A player with high Decisions is going to try some ambitious play, unachievable without high Technique, First Touch and Passing attributes. I.e. the test creates armchair geniuses with no ability to actually execute their decisions.

2. Testing attributes in isolation is not a good idea. In the top-tier AC Milan team I am playing, the maximum Decisions attribute I see is 15. These players have an average of 15 for Passing, Technique and First Touch. Compare with the test, where average attributes are 10 and Decisions is 20.

Edited by fmnatic

1 minute ago, fmnatic said:

While it is possible the Decisions attribute is broken when simulating matches, it seems to work fine in the match engine. I checked my season stats vs. the Decisions attribute. Guess what? The Decisions stat lines up perfectly with the number of key passes across the season for each player. (The Technique and First Touch stats seem to help too.)

My take is the test is broken in two ways.

1. It assumes better Decisions is better for every player. This is probably not true. A player with high Decisions is going to try some ambitious play, unachievable without high Technique, First Touch and Passing attributes. I.e. the test creates armchair geniuses with no ability to actually execute their decisions.

2. Testing attributes in isolation is not a good idea. In the top-tier AC Milan team I am playing, the maximum Decisions attribute I see is 15. These players have an average of 15 for Passing, Technique and First Touch. Compare with the test, where average stats are 10 and Decisions is 20.

It is perhaps possible that the Decisions attribute dictates what a player SHOULD do in a situation, which could be very incongruent with what the player is actually capable of doing, though it does seem sensible to infer that good decision making would lead a player to realise when it would be unwise to attempt something beyond their skill level...


I had a quick look at this; it seems the tests were performed with the quick match engine, which tends to rely on CA more than anything. For accurate tests the detail level for the Bermudan league needs setting to "All".

I'll take a look and see if the QM has some flaw regarding Decisions somehow, but in leagues where a user is playing, all matches are played with the full match engine anyway.


I hope SI take a deep look into this issue. It raises the question of elite midfielders/forwards coming into the box and shooting from just outside it, or inside it, but shooting waywardly, particularly in the wrong direction. Is the Decisions stat pushing them to shoot, and is their bad decision making (it looks like it's inverted) causing them to shoot unnecessarily?

 

 


Exactly how does the match engine determine a better decision? Let's say a striker has the ball near the penalty box with only one defender in front of him, and he has 4 options:
1. Dribble past the defender and shoot.
2. Dribble past the defender, then past the GK as well, then shoot.
3. Pass to the AML, who is an average player in a winger role but nearer the goal.
4. Pass to the AMR, who is a very good player in an inside forward role but further from the goal.

So in this case, how does the Decisions attribute work to make the decision?
 

Edited by edk77


@Bluesoul vs. The Bearodactyl

As the post above me raised an important point: did you use the IGE to freeze attributes before starting the simulation, to make sure they wouldn't change?

Or did you just set Decisions to 1 or 20 and let the game alter the attributes? If so, how did they improve/decrease, and were the changes drastic?

 

2 minutes ago, ferrarinseb said:

@Bluesoul vs. The Bearodactyl

As the post above me raised an important point: did you use the IGE to freeze attributes before starting the simulation, to make sure they wouldn't change?

Or did you just set Decisions to 1 or 20 and let the game alter the attributes? If so, how did they improve/decrease, and were the changes drastic?

 

The latter. I didn't freeze attributes because I couldn't find a way to do so without going player-by-player for 700+ players. I like finding results...but there are limits. :lol: The changes are never drastic, and they're consistent, which is helpful. A player will normally see their attributes decay by 1 to 2 points over a season when their PA and CA are equal, at the age of 26, with staff of average coaching and training ability. The attributes rarely if ever increase by the end of the season. Players with Decisions 1 vs. 20 both see attributes decay; I was concerned that perhaps setting Decisions to 1 would cause other attributes to increase, but that was not the case.

12 minutes ago, Bluesoul vs. The Bearodactyl said:

The latter. I didn't freeze attributes because I couldn't find a way to do so without going player-by-player for 700+ players. I like finding results...but there are limits. :lol: The changes are never drastic, and they're consistent, which is helpful. A player will normally see their attributes decay by 1 to 2 points over a season when their PA and CA are equal, at the age of 26, with staff of average coaching and training ability. The attributes rarely if ever increase by the end of the season. Players with Decisions 1 vs. 20 both see attributes decay; I was concerned that perhaps setting Decisions to 1 would cause other attributes to increase, but that was not the case.

Well, that's surprising and devastating to say the least. It shouldn't be the case, as a decrease in one attribute, even in a normal game, would be made up for by an increase in another.

7 hours ago, fmnatic said:

1. It assumes better decisions is better for every player. This is probably not true. A player with high decisions is going to try some ambitious play , unachievable without high technique, first touch, passing attributes. I.e. the test creates armchair geniuses with no ability to actually achieve their decision.

I would think the ambitious play is more tied to the likes of flair and vision. As I understand the decisions attribute, it should enable the players to be able to choose the best course of action more often, in any and all situations.

In fact, could it be that this contributes to the poor performances of the high-Decisions teams in the tests? Maybe they don't create enough because these players understand their limitations better and will too often go for the safer pass rather than attempting the ambitious pass they deem themselves unlikely to be able to pull off?

7 hours ago, edk77 said:

So in this case, how does the Decisions attribute work to make the decision?
 

The match engine could possibly just simulate future outcomes, and then go with a good/better outcome.
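Purely speculative, but here's one toy way an attribute like Decisions could gate option selection. None of this is FM's actual logic; the options and their success chances are invented from the scenario in the quoted post:

```python
import random

def choose_option(options, decisions, rng=random):
    """Toy model: Decisions (1-20) sets how often the player picks the
    option with the best expected payoff instead of an arbitrary one.
    Purely illustrative; not how the FM match engine actually works."""
    if rng.random() < decisions / 20.0:
        return max(options, key=lambda o: o[1])  # the "simulated best" outcome
    return rng.choice(options)                   # otherwise a random choice

# The four options from the post above, with invented success chances.
options = [
    ("dribble past defender and shoot", 0.10),
    ("round the keeper too, then shoot", 0.05),
    ("pass to AML (average, but closer to goal)", 0.15),
    ("pass to AMR (better player, further out)", 0.12),
]

rng = random.Random(1)
picks = [choose_option(options, decisions=20, rng=rng)[0] for _ in range(1000)]
print(picks.count("pass to AML (average, but closer to goal)"))  # -> 1000
```

Under this sketch, Decisions 20 always takes the best-payoff option, while Decisions 1 is nearly random, which is one way an engine could trade consistency for unpredictability.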

37 minutes ago, sporadicsmiles said:

Can I make an observation as a scientist used to performing experiments where you try to isolate a single variable from a large pool of variables?

The problem is that both in the real world and in FM, it is often not possible to change a single thing in isolation from all other variables. In this case, the creator of the article has done a good job (and put a lot of effort) into changing systematically a single attribute. However, this methodology says nothing about how the attributes interact with one another. That is true of any experiment like this, and it is also true in the real world. I work in science, and we have to put a whole load of effort into proving that we are indeed isolating a single variable and measuring its effect only. For example, if I was going to try to show effect X is caused by variable Y by technique Z on system A, I would also run an experiment on system B - which is as closely related to system A as possible - that also contains variable Y, but should not exhibit effect X when probed by technique Z. That way, I can show that effect X is not caused by any of the innumerable other variables shared by A and B, and can conclude safely that it is variable Y causing effect X. This is, of course, still not perfect, but I hope it serves to illustrate a point.

On the experiment here, there is no such "blank" experiment. Such an experiment would have been to make 36 absolutely identical sides, and run the simulation X number of times. What you would want to see there is that the winning percentages are totally random, since there should be no effect if everything is the same. This would be a great baseline, and not so difficult to set up. I may be wrong on the next point, but it would also be very important to run this test many, many, many times, resetting the database each time, in order to be certain your statistics are not a matter of fluke. These two things are utterly vital in order to ensure that your method is actually suitable. As I also tend to err on the side of caution, I would also suggest performing an identical experiment to the one where decisions run from 1 to 20, but with a different value for each of the other attributes. I would in fact run one where they were all set to 1, and one where they are all set to 20. This should tell you how decisions interacts with all the other variables. To be even more cautious, I would run all of these experiments (including the original one you did on decisions) on Bravery, which caused the most wins. If you expect that bravery follows a linear scale (1 = very important, 20 = very important), then you should see a clear correlation between winning and bravery. After that, I think I would be convinced by any interpretation.

Just to be clear, I am not doubting the validity of the results, because results are what they are, and the author of that piece has done some sterling work. I do not, however, think we have enough data to make any firm conclusions from what has been found. It is an interesting effect, but much more work would be required to understand. If I had the time and the computer this Christmas, I might well set up these experiments, but sadly I do not. I would be happy to collaborate on the data analysis with anyone who does want to run them though!

 

This guy gets it.

There's not a problem with the experiment itself, but you can't really draw some of the conclusions that have been drawn from it so far. My first thought on reading it was that while it shows something when all players have 10 for everything but Decisions, it doesn't tell you how the attribute behaves in less strict circumstances. If there is a problem with the attribute, is it exacerbated by other attributes being higher or lower? On the other hand, is the "problem" being partially caused by such sterile conditions?

To be fair, the amount of time and resources you'd have to pour in as a player (i.e. without the tools SI have) to get anything meaningful on those counts is probably far too high to be worthwhile.
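The "blank" baseline from the quote above is actually cheap to sanity-check in a standalone simulation. A rough sketch of the idea, with a coin-flip league of identical sides and an illustrative chi-squared check (nothing here comes from the game itself):

```python
import random

def null_season(n_teams, rng):
    """One season of the 'blank' experiment: every pairing is a fair
    coin flip (draws ignored for simplicity), so any spread in the
    final table is pure luck."""
    wins = [0] * n_teams
    for i in range(n_teams):
        for j in range(i + 1, n_teams):
            wins[i if rng.random() < 0.5 else j] += 1
    return wins

def chi_squared(wins):
    """Chi-squared statistic of win counts vs. the uniform expectation.
    Consistently large values would mean the setup itself is not neutral."""
    expected = sum(wins) / len(wins)
    return sum((w - expected) ** 2 / expected for w in wins)

rng = random.Random(0)
# Re-run the season many times, resetting each time, as suggested above.
stats = [chi_squared(null_season(36, rng)) for _ in range(200)]
print(sum(stats) / len(stats))  # average statistic over the replays
```

The same harness could then be pointed at the real exported league tables, to ask whether the spread produced by varying Decisions is actually bigger than this luck-only baseline.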

44 minutes ago, sporadicsmiles said:

On the experiment here, there is no such "blank" experiment. Such an experiment would have been to make 36 absolutely identical sides, and run the simulation X number of times. What you would want to see there is that the winning percentages are totally random, since there should be no effect if everything is the same. This would be a great baseline, and not so difficult to set up. I may be wrong on the next point, but it would also be very important to run this test many, many, many times, resetting the database each time, in order to be certain your statistics are not a matter of fluke. These two things are utterly vital in order to ensure that your method is actually suitable. As I also tend to err on the side of caution, I would also suggest performing an identical experiment to the one where decisions run from 1 to 20, but with a different value for each of the other attributes. I would in fact run one where they were all set to 1, and one where they are all set to 20. This should tell you how decisions interacts with all the other variables. To be even more cautious, I would run all of these experiments (including the original one you did on decisions) on Bravery, which caused the most wins. If you expect that bravery follows a linear scale (1 = very important, 20 = very important), then you should see a clear correlation between winning and bravery. After that, I think I would be convinced by any interpretation.

I too shared this criticism of the test. Changing Decisions in isolation may not be a useful test if the match engine uses it in conjunction with other variables.

For me the true concern raised by bluesoul's work is not that teams with Decisions 20 are performing badly. (This could just be the outcome of other attributes in conjunction with Decisions.)

The real concern is that lowering Decisions to 1 across a real database team creates a better-performing team.


Further, a quick database search for players with Decisions > 18 shows this is accompanied by other high attributes (typically 5-6 attributes > 17) in the database. So the underperformance of the 20-Decisions team is not concerning.

However, the outperformance of the Decisions = 1 team really is a concern. The Decisions attribute tends to rise with age, so the older players may be underperforming.

Being a member of the play-the-kids brigade, I may actually be gaining from this. :)

Edited by fmnatic

8 minutes ago, fmnatic said:

I too shared this criticism of the test. Changing Decisions in isolation may not be a useful test if the match engine uses it in conjunction with other variables.

For me the true concern raised by bluesoul's work is not that teams with Decisions 20 are performing badly. (This could just be the outcome of other attributes in conjunction with Decisions.)

The real concern is that lowering Decisions to 1 across a real database team creates a better-performing team.

While I agree, playing devil's advocate instead: could the latter suffer from the same issue you originally mentioned? Is the flat spread of attributes causing players with great decision making to underperform, and those with terrible decision making to overperform, purely because of this "unrealistic" attribute spread? Obviously that could still be a bug, but maybe not a concerning one. Is poor decision making leading them to rely more on their instructions than "good" decision making would? Rhetorical questions of course, and since it looks like SI will take a look, they'll be able to draw more accurate conclusions.


So I've got a bit into the season with the original data file (each team with a different attribute maxed out) using the full match engine; Decisions is near the bottom of the table, but not rock bottom like in the quick match.

A couple of observations: Decisions has a high weighting, so maxing it seems to lower the other attributes more. Almost every other attribute is an 8, whereas for most of the other teams I looked at, most other attributes were 9; cumulatively that's a fairly big difference.

The other thing is that all the players can play all positions, which messes up the attribute weightings and may be causing team selection inconsistencies - it is certainly not near-normal data.

I will analyse the other data files too and see what is going on there.

17 minutes ago, forameuss said:

While I agree, playing devil's advocate instead: could the latter suffer from the same issue you originally mentioned? Is the flat spread of attributes causing players with great decision making to underperform, and those with terrible decision making to overperform, purely because of this "unrealistic" attribute spread? Obviously that could still be a bug, but maybe not a concerning one. Is poor decision making leading them to rely more on their instructions than "good" decision making would? Rhetorical questions of course, and since it looks like SI will take a look, they'll be able to draw more accurate conclusions.

@forameuss @EdL I found the original test too artificial, setting Decisions to a fixed value and leaving other attributes at 10.

The bigger concern is the Newcastle tests carried out later. These seem to be free of artificial constraints. Not sure whether the lowered other attributes are an issue here too.

https://strikerless.com/2017/12/18/fm18-labs-the-final-decisions-results/

12 minutes ago, fmnatic said:

@forameuss @EdL I found the original test too artificial, setting Decisions to a fixed value and leaving other attributes at 10.

The bigger concern is the Newcastle tests carried out later. These seem to be free of artificial constraints. Not sure whether the lowered other attributes are an issue here too.

https://strikerless.com/2017/12/18/fm18-labs-the-final-decisions-results/

Not to myself. Only buy players with ****** decision making. :lol:

34 minutes ago, EdL said:

So I've got a bit into the season with the original data file (each team with a different attribute maxed out) using the full match engine; Decisions is near the bottom of the table, but not rock bottom like in the quick match.

A couple of observations: Decisions has a high weighting, so maxing it seems to lower the other attributes more. Almost every other attribute is an 8, whereas for most of the other teams I looked at, most other attributes were 9; cumulatively that's a fairly big difference.

The other thing is that all the players can play all positions, which messes up the attribute weightings and may be causing team selection inconsistencies - it is certainly not near-normal data.

I will analyse the other data files too and see what is going on there.

Ah yeah that might be a reason. So to really test this we would have to freeze all other attributes of players.


 

12 minutes ago, Robioto said:

Really interesting thread... and a bit worrying.

It is an interesting thread, but there's only so far you can go when analysing raw data; at some point the original tester or SI has to stop, extrapolate a theory and then watch hundreds of full matches in the ME to pinpoint the specific instances that create the data, and then analyse those.

If all that happens is chasing statistics that match real life, then I'd hypothesise that all we'd end up with is a robotic ME that lacks the illusion of realistic behaviour.

7 minutes ago, wicksyFM said:

I think i will put starting a save on hold until this potential issue is sorted

So it's not even certain there is an issue (as stated above by several people, the data analysis is far from perfect), but you'll still stop playing? You do realise that this might take months, if not longer, just to conclude that there's nothing wrong? :)

14 minutes ago, Robioto said:

Really interesting thread... and a bit worrying.

Not particularly worrying IMO.

As Ed has stated above, altering one attribute has a knock-on effect on other attributes, making them higher or lower to keep the player at his required CA.

What you can theorise from it is that the weighting for decisions is probably a little too high when compared to the effect it has within the ME.

